
Guide to Sources for Finding Unpublished Research


  • Research Networks
  • Conference Proceedings
  • Clinical Research in Progress
  • Grey Literature
  • Institutional Repositories
  • Preprint Servers
  • Finding Theses

This guide takes you through the tools and resources for finding research in progress and unpublished research in Paramedicine. 

What do we mean by unpublished?  

Typically we mean anything that is publicly available on the internet but isn't published formally as a journal article or in conference proceedings. By their nature these "unpublications" are varied, but they might include things like:

  • Preprints: work in progress or an early version of an article intended for publication, made available for comment by interested researchers (see the search sketch after this list),
  • Presentations, posters and conference papers published on personal websites or on research networks like ResearchGate or Mendeley,
  • Theses and dissertations published on the web or through repositories.
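
As a concrete illustration of how preprints can be searched programmatically, here is a minimal Python sketch using the Europe PMC REST search service, which indexes many preprint servers. The endpoint, the SRC:PPR preprint filter, and the response field names are assumptions based on Europe PMC's public documentation rather than anything prescribed by this guide, so check them before relying on the sketch.

```python
# Minimal sketch: searching Europe PMC for paramedicine-related preprints.
# Assumes the public Europe PMC REST search endpoint and its SRC:PPR preprint
# filter (both taken from public documentation, not from this guide).
import requests  # third-party: pip install requests

BASE_URL = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def find_preprints(topic, page_size=25):
    """Return basic metadata for preprint records matching a topic keyword."""
    params = {
        "query": f'({topic}) AND SRC:"PPR"',  # "PPR" = preprint source (assumed code)
        "format": "json",
        "pageSize": page_size,
    }
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    results = response.json().get("resultList", {}).get("result", [])
    return [
        {
            "title": record.get("title"),
            "authors": record.get("authorString"),
            "year": record.get("pubYear"),
            "source": record.get("source"),
        }
        for record in results
    ]

if __name__ == "__main__":
    for record in find_preprints("paramedic OR prehospital"):
        print(f"{record['year']}  {record['title']}  ({record['authors']})")
```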

Unpublished research can be harder to find for a number of reasons. There is no one place to look, so you have to dig a little deeper; the tools you can use to do this are covered in this guide. There also isn't that much of it, for several reasons. Paramedic researchers are relatively few and widely dispersed, both geographically and across different organizations (academic and EMS/ambulance services). Compared with similar areas, paramedic research is in the early stages of development; to use an analogy, paramedic research is still taxiing up the runway while other areas are already up and flying. Finding it is not impossible; it's just harder than in more established research areas.

Why would you want to look?

If you are wondering why you would want to search for unpublished material, there are several possible reasons:

  • Completeness: you may need to cover a topic comprehensively, including work in progress and projects and ideas that haven't made it to formal publication,
  • Real-world examples and case studies: not every project or implementation will make it to formal publication, but it may be reported informally as a presentation, thesis or dissertation,
  • Currency: the lengthy publication process encourages researchers to find alternative routes to promote research in progress, share ideas and inform current practice. Typically this means preprints, but there are other informal methods, such as copies of posters and presentations.
  • Last Updated: Nov 9, 2023 10:51 PM
  • URL: https://ambulance.libguides.com/unpublishedresearch

How to Cite an Unpublished Paper or Manuscript in APA Referencing

  • 23rd June 2020

Did you know that you can cite unpublished works, such as in-progress research papers or manuscripts, in an essay? Well, you can! The key is citing them correctly. And in this post, we will look at how to cite an unpublished paper or manuscript in APA referencing.

How to Cite an Unpublished Paper in APA Referencing

In APA referencing, you can cite an unpublished work in the same way as you would a published one. This means giving an author’s name and a date in brackets. The only difference is that you give a year of production (i.e., when the paper was written) rather than a year of publication:

Few fully understand the publication process (Clarke, 2020).

Like other sources, if you name the author in the text, you do not need to repeat it in the brackets. And if you quote an unpublished paper, you should give page numbers. For example:

According to Clarke (2020), publication “is a complex process” (p. 20).

When a paper has been accepted for publication but not yet published, however, you should use the term “in press” in place of a year in citations:

Few fully understand the publication process (Clarke, in press).

How to Reference an Unpublished Work in APA Referencing

When adding an unpublished paper to an APA reference list , the correct format will depend on where it is in the publication process. But let’s start with works that will not be published at all (e.g., a paper that the author never submitted or that the publisher rejected).

In this case, the correct format is:

Author Surname, Initial(s). (Year of Production). Title of manuscript [Unpublished manuscript]. Department, University Name.

So, in practice, we could cite an unpublished paper like this:

Clarke, J. (2020). The publication process explained [Unpublished manuscript]. School of Journalism, Media and Performance, University of Central Lancashire.

Referencing a Work Submitted for Publication

If a paper has been submitted for publication but not yet accepted, the reference should state “manuscript submitted for publication.” However, you should not include any other information about the submission, such as where it was submitted, as this information could go out of date quickly.


The correct format in this case is therefore:

Author Surname, Initial(s). (Year of Production). Title of manuscript [Manuscript submitted for publication]. Department, University Name.

For example, we would list the paper above as follows:

Clarke, J. (2020). The publication process explained [Manuscript submitted for publication]. School of Journalism, Media and Performance, University of Central Lancashire.

Referencing a Paper in Press

If a paper has been accepted for publication, use the following format:

Author Surname, Initial(s). (in press). Title. Periodical or Journal Title.

As you can see, we now include both:

  • The phrase “in press” to show that the paper has been accepted by the journal and is now awaiting publication.
  • The title of the journal that accepted it (note, too, that we only use italics for the journal title here, not the title of the paper itself).

In practice, then, we would reference a paper awaiting publication like this:

Clarke, J. (in press). The publication process explained. Publishing Research Quarterly.
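
If you like to see the patterns spelled out programmatically, here is a small illustrative Python helper that assembles a reference line for each of the three statuses described above. It is only a sketch of those patterns, not an official APA tool, and the function and field names are invented for this example.

```python
# Illustrative sketch of the three APA reference patterns described above.
# Not an official APA tool; the function and field names are invented here.

def apa_reference(surname, initials, title, year=None,
                  status="unpublished", affiliation="", journal=""):
    """Build an APA-style reference line for an unpublished, submitted, or in-press paper."""
    if status == "unpublished":
        return f"{surname}, {initials}. ({year}). {title} [Unpublished manuscript]. {affiliation}."
    if status == "submitted":
        return f"{surname}, {initials}. ({year}). {title} [Manuscript submitted for publication]. {affiliation}."
    if status == "in_press":
        # The year is replaced by "in press" and the journal title (italicised in print) is added.
        return f"{surname}, {initials}. (in press). {title}. {journal}."
    raise ValueError(f"Unknown status: {status}")

print(apa_reference("Clarke", "J", "The publication process explained", 2020,
                    status="unpublished",
                    affiliation="School of Journalism, Media and Performance, University of Central Lancashire"))
print(apa_reference("Clarke", "J", "The publication process explained",
                    status="in_press", journal="Publishing Research Quarterly"))
```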

It is always worth checking the status of submitted papers before finalizing your reference list, too, as they can go from “submitted for publication” to “in press” quite suddenly, leaving your reference out of date.

Hopefully, you will now be able to cite an unpublished paper or manuscript correctly. But if you would like any further help with your writing, why not submit a document for proofreading?



Preemption Check Checklist

  • Getting Started
  • Step 1: Search Terms
  • Step 2: Law Articles
  • Step 3: Non-Law Articles
  • Step 4: Books
  • Step 5: Unpublished Materials
  • Step 6: Current Awareness Alerts
  • Additional Resources: International Law Topics

Step 5: Searching for Unpublished Articles

The publication process takes a long time—sometimes a year or more—so it's important to search for articles on your topic that have already been written but not yet published. SSRN and bepress are the best sources for unpublished articles and working papers:

  • Social Science Research Network (SSRN): disseminates abstracts and full-text documents. Try starting with a broad keyword search, then use the "Search Within Results" box to narrow your results. For case studies, click on Search, select Title Only, and then follow the instructions above (the asterisk doesn't work in SSRN, so you have to type "case study" or "case studies").
  • bepress Legal Repository: the Berkeley Electronic Press hosts working papers from many law schools, including Yale, Berkeley, Michigan, and Virginia, and is especially useful for law and economics research. Try both a keyword search and browsing by "Subject Areas" to find articles on your topic. The searching is unreliable, but you can browse papers by topic.
  • Google Scholar: useful for finding conference papers and other grey literature that is not published in article databases.

Searching for Conferences & Workshops

Check the Legal Scholarship Blog for conferences, workshops, and calls for papers on your topic. You may find that a law journal is hosting an entire symposium on your topic, or a law professor is currently researching your topic. This will alert you to potential preemption issues that may crop up down the road.

  • Legal Scholarship Blog: carries news of upcoming legal academic conferences. Use the search box (top right corner) or browse the subjects listed under "Categories" to find conferences related to your topic.

Searching for Blog Posts

Search for law blog posts on your topic. While a blog post cannot preempt a scholarly article or paper on the same topic, it can help you identify scholars who are interested in your topic. You can then review their published and unpublished works, and set alerts to receive notification of their future publications.

  • Justia BlawgSearch Sort by date, instead of relevance, to see the most recent posts on your topic. If you get too many results with a keyword search, try browsing the subjects listed under "Categories."

  • Updated: Aug 21, 2024 11:25 AM
  • URL: https://guides.lib.uchicago.edu/preemption


How to cite my own submitted but not yet published work?

I plan to submit part of my current work to conference A. I then wish to submit my whole work to a more prestigious conference B. As for the part submitted to A, there is no point in repeating it, so I will just cite my submission to A in my submission to B.

But the problem is that the submission deadlines for A and B are roughly the same. So actually the moment I submit the work to B, my partial work submitted to A has not been published yet. I have not even been notified of its acceptance.

Can I still cite it? My concern is that even if I can cite it, one will find nothing online.

  • publications
  • paper-submission


  • 2 I definitely do not see the point of citing an unavailable paper. As you do not repeat the part of A that would also belong to B, the first thing you have to do is to make A available before submitting B. How would the referees do their work otherwise? After that, citing is an issue easily settled. –  Benoît Kloeckner Commented Aug 23, 2013 at 21:48
  • @BenoîtKloeckner But as I said, their deadlines are roughly the same. By "no point", do you mean the reviewers will have difficulty finding the paper? If that is the case, will arXiv fix that? – Sibbs Gambling Commented Aug 24, 2013 at 1:25
  • 1 My point is indeed that the referees have to have access to the papers they need in order to read the paper under review. arXiv is a good solution, if it is OK with your conference, as suggested by some answers. – Benoît Kloeckner Commented Aug 24, 2013 at 8:28

6 Answers

In principle you can cite other, submitted work in a research paper. Just give the authors, paper title, and either "Submitted." or "Submitted to [venue]." in the reference list.

However, both as a reviewer and as a reader, I usually find this disappointing. I have already come across several cases where I wasn't able to find the cited paper even years after publication of the paper containing the citation. It is quite possible that the cited paper is rejected, and maybe someone just doesn't follow up to really get it published. As a better alternative, check whether you can put a preprint version of the paper you want to cite online (e.g. on arXiv), and just cite that.

– silvado

  • 3 But will putting it online, say on arXiv, affect the acceptance of the paper? I mean, will the conference reject the paper because it is on arXiv? What's more, if the paper gets rejected, I may wish to refine it and re-submit it somewhere else. But if I put it online on arXiv, will anyone freely steal my work? – Sibbs Gambling Commented Aug 23, 2013 at 8:36
  • 4 @perfectionm1ng 1) Check with the conference whether they accept papers that have been published as preprints. 2) If someone steals from an arXiv paper, it's clearly plagiarism, and you may even prevent someone else from publishing the same idea before you. – silvado Commented Aug 23, 2013 at 10:00
  • 8 +1 for the suggestion to publish it on arXiv. Or just make it available on your website. In many parts of math, physics, and CS, most papers are published first this way. – David Ketcheson Commented Aug 23, 2013 at 17:20
  • 1 @DavidKetcheson May I ask why most papers are published first this way? In cases other than mine, I don't understand why they would do it. – Sibbs Gambling Commented Aug 24, 2013 at 1:27
  • 6 Putting your work on the arXiv does not prevent other people from publishing roughly the same thing, which sometimes happens in good faith, but it gives you priority since the arXiv deposit is dated. – Benoît Kloeckner Commented Aug 24, 2013 at 12:01

You are allowed to cite works in submission as part of your ongoing research; this is something I've had to do on a number of instances for publications I wrote both in graduate school and as a post-doc.

The key here is that you must cite the work only as "Submitted to Conference A" rather than a standard reference to a work published in the proceedings. You would then, if possible, provide the conference paper A as an appendix or supporting information for the referees.

– aeismail

Citing something that is not published will prevent reviewers from doing their job, so it's a big no-no if you want to improve your chances of being accepted. The best way to go is to be patient and submit to B next year, having had the chance to improve using the reviews from A.

If this is not at all possible, you may be able to publish A right now as a technical report from your lab/department and cite it as such. You'll have to check the guidelines of both conferences, namely whether A accepts material previously published as a TR (in CS at least this is very common) and whether B accepts citing TRs (usually also true in CS, as long as it is easily available online).

Most importantly, when citing from a non-refereed source like a TR, you have to be very prudent in the way you characterize the work. Remember that it was only accepted in your department as an interesting document, not properly validated using the scientific contribution standards of your community. If I read a claim that something was "proven", or "shown", or "demonstrated" by a tech report, I'll probably reject the paper.

In any case, do not just cite A unless it is tangential (and in that case, why cite it at all?). If it's actually important, give it an overview in your B submission, sufficient for a reviewer to keep on reading.

– user8346

  • Just for clarification: you're saying that if I cite my older paper from my newer paper, then the reviewers of my newer paper will not be able to do their job of reviewing my newer paper by looking up references. Is that what you mean? –  jvriesem Commented Oct 5, 2019 at 16:40

I believe that there are a few issues that need to be addressed in this situation:

  • You believe that part A is based on fundamentally sound methodology and the findings will be accepted within the community of your discipline.
  • Can you cite works in submission?
  • Works in submission are not available to the public.

Whether or not Part A is widely accepted, you can cite it as a work in submission; as for the second concern, you can also cite it as an unpublished work. For the proper format, check the manual of style for your discipline.

An example of an unpublished work not submitted for publication using APA Manual of Style: Lincoln, A. (1863). The principles of human equality. Unpublished manuscript.

An example of a work in progress or submitted but not yet accepted using APA Manual of Style: Lincoln, A. (1863). Gettysburg Address: The principles of human equality. Manuscript submitted for publication (copy on file with author).

As far as the third concern goes, I have reviewed numerous submissions to everything from small local to international conferences, and to the equivalent array of professional publications and journals. Personally, I prefer that a brief description of the "Part A" methods and findings be given in a manuscript. However, when it comes to an abstract with space limitations, a simple "previously we (I) found...; therefore, we furthered the body of knowledge with..." was always sufficient for my standards.

– SteveK

Agreed with @aeismail; I just found a solution indicated in the IEEE conference paper template, as follows:

"Papers that have not been published, even if they have been submitted for publication, should be cited as "unpublished".

e.g. K. Elissa, "Title of paper if known," unpublished."

– Eilia

I usually don't like to have many public versions of the same paper. I prefer releasing papers on arXiv only after receiving reviews and addressing relevant comments.

To address this issue, a solution I have been thinking about is to share the preprint I want to cite privately, i.e. so that it is only accessible to people reviewing the submission. This could be done by protecting access to the paper with a password that is given in the citation: e.g., J. Guerin, "Title of the paper", unpublished, available at "URL", password: XXX. The citation can then be fixed once the cited paper is actually released.

Does anyone have comments on why this might be a bad idea? I don't see any problem so far.

– Joris Guerin



Out of sight but not out of mind: how to search for unpublished clinical trial evidence

  • An-Wen Chan , assistant professor and Phelan scientist
  • 1 Women’s College Research Institute, University of Toronto, Toronto, Ontario, Canada
  • Correspondence to: A-W Chan anwen.chan{at}utoronto.ca

A key challenge in conducting systematic reviews is to identify the existence and results of unpublished trials, and unreported methods and outcomes within published trials. An-Wen Chan provides guidance for reviewers on adopting a comprehensive strategy to search beyond the published literature

Summary points

The validity of systematic reviews relies on the identification of all relevant evidence

Systematic reviewers should search for unpublished information on the methods and results of published and unpublished clinical trials

The potential sources of unpublished information on clinical trials have expanded over recent years

Recognition of the strengths and limitations of these key information sources can help to identify areas for further emphasis and improvement

Systematic reviews of randomised trials play a key role in guiding patient care and health policy. Their validity depends to a large extent on reviewers’ ability to retrieve relevant information from all existing trials. Unfortunately, about half of clinical trials remain unpublished after receiving ethics approval—particularly those with statistically non-significant findings. 1 Even when published, most journal articles do not report all of the outcome data or key methodological information. 2 3 The overall result is that the published literature tends to overestimate the efficacy and underestimate the harms of a given intervention, while providing insufficient information for readers to evaluate the risk of bias.

It is thus important that systematic reviewers adopt a comprehensive strategy to search beyond the published literature. The optimal systematic review would have complete information about every trial—the full protocol, final study report, raw dataset, and any journal publications and regulatory submissions. 4 The eligibility and risk of bias for each trial could then be evaluated, regardless of its publication status.

There are several potential sources of unpublished information on trial methods and results (table). These sources can help to identify the existence and results of unpublished trials, as well as unreported outcomes within published trials. They can also provide methodological information that facilitates assessment of risk of bias, including the detection of discrepancies between unpublished and published methods. 5 6 Systematic reviewers should consider using all potential information sources as part of their search strategy, while keeping in mind the strengths and limitations of each source (table).

Potential sources of unpublished information on trial methods and results


Trial registries and results databases

Trial registries serve as a readily accessible online resource for identifying unpublished trials and unreported outcomes. Since 2005, prospective trial registration has gained broad acceptance as an important means of enhancing transparency and tracking the existence of clinical trials at inception. Key stakeholders—including medical journal editors, legislators, and funding agencies—provide enforcement mechanisms that have greatly improved adherence to registration practices.

Basic protocol information on ongoing and completed trials of any intervention type can be retrieved via the World Health Organization’s International Clinical Trials Registry Platform Search Portal ( www.who.int/trialsearch/ ). This searches records from national and international trial registries that meet certain standards, including WHO Primary Registries and ClinicalTrials.gov. Users can search the main registry fields using key words related to the study topic, sponsor, recruitment status, and sites. When the same trial is registered in multiple registries, the WHO Search Portal displays similar records together to facilitate identification of duplicate records. Some registry websites also provide access to the history of changes to the registered information fields.

In addition to basic protocol information, certain registries house study results. Since 2008, ClinicalTrials.gov has had the legislative mandate to record summary results for trials (other than phase I) that involve a drug or device regulated by the US Food and Drug Administration. 7 Sponsors are required by law to provide summary baseline and outcome data, which are displayed in a standard format.
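
Purely as an illustration of what programmatic access to a registry can look like today, here is a minimal Python sketch against ClinicalTrials.gov. The v2 endpoint, parameter names, and response fields are assumptions based on the registry's current public API documentation (which post-dates this article), so verify them before use.

```python
# Minimal sketch: pulling basic registry records from ClinicalTrials.gov.
# Assumes the v2 REST endpoint and its query parameters (taken from current
# public documentation, not from the article above); verify before relying on it.
import requests  # third-party: pip install requests

API_URL = "https://clinicaltrials.gov/api/v2/studies"

def search_registry(term, page_size=20):
    """Return condensed protocol information for registered trials matching a term."""
    params = {"query.term": term, "pageSize": page_size}
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    records = []
    for study in response.json().get("studies", []):
        protocol = study.get("protocolSection", {})
        ident = protocol.get("identificationModule", {})
        status = protocol.get("statusModule", {})
        records.append({
            "nct_id": ident.get("nctId"),
            "title": ident.get("briefTitle"),
            "status": status.get("overallStatus"),
        })
    return records

if __name__ == "__main__":
    for rec in search_registry("prehospital airway management"):
        print(rec["nct_id"], rec["status"], rec["title"], sep="  ")
```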

Some pharmaceutical companies also maintain their own voluntary trial registers and results databases for drugs that have received regulatory approval. Systematic reviews have previously incorporated unpublished data retrieved from industry registers. 8 These public registers provide a synopsis of trial methods and summary results as dictated by company policy. Information is presented in various formats with non-standardised content. For certain companies, there may be information posted for older trials of some marketed interventions. It should be noted that ClinicalStudyResults.org, the results database launched by the International Federation of Pharmaceutical Manufacturers and Associations in 2004, was to be discontinued by the end of 2011 because of overlap with other registries.

Beyond basic protocol information and results, trial registries have the potential to be the repository for full protocols. Legislation in the US allows for the possibility of requiring submission of full protocols to ClinicalTrials.gov for applicable trials. 7 Furthermore, certain pharmaceutical companies are recognising the importance of public access to full protocols and have committed to posting them on their register for all published trials. 9 These are promising first steps towards facilitating access to protocols for all trials, regardless of publication status.

Despite their importance, trial registries and results databases have several limitations. Firstly, there is no universal mechanism for ensuring adherence to standards for registration or results disclosure, meaning that not all trials will be captured. Journal policy will be ineffective for trials that are not intended for publication, while current legislation does not pertain to procedural, educational, and other unregulated interventions. Secondly, the quality of registered information is highly variable and often uninformative. 7 10 11 12 13 Changes to registered information are common, 12 meaning that systematic reviewers should review the history of amendments for each registry record. Thirdly, even when a trial is fully registered with complete summary results presented, there is a limited amount of methodological information available that is largely inadequate for assessing the risk of bias. 10 This concern would be addressed if full protocols were made available on the registries. 9 14 Finally, most trials will not have been registered prior to the introduction of International Committee of Medical Journal Editors policy and WHO standards in 2005.

Regulatory agencies

Regulatory agencies have access to substantially more clinical trial information than the healthcare providers, patients, and researchers who use and evaluate the interventions. Successful attempts to obtain access to regulatory data have previously necessitated litigation and incurred lengthy delays. 15 16 17 Over recent years, regulatory agencies have recognised the need to address this untenable situation by increasing public access to information from regulatory submissions. 18 19

There are currently two main routes for reviewers to obtain trial data from regulatory agencies—scientific reviews posted in online databases, 20 21 and written requests to regulatory agencies. 15 Scientific reviews of regulatory submissions contain a narrative summary of the clinical trials that form the basis for approval of regulated drugs. These documents are generally available on searchable internet databases provided by the US Food and Drug Administration and the European Medicines Agency:

Drugs@FDA— www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm

European public assessment reports (EPAR)— www.ema.europa.eu/ema/index.jsp?curl=pages/medicines/landing/epar_search.jsp&murl=menus/medicines/medicines.jsp&mid=WC0b01ac058001d125&jsenabled=true

Relevant clinical trial summaries are generally labelled as “Statistical review” on Drugs@FDA, and “Scientific Discussion” in EPAR. The Pharmaceuticals and Medical Devices Agency in Japan ( http://www.pmda.go.jp/english/service/approved.html ) also posts a limited number of reviews with English translations for select drugs and devices.
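
In the same spirit, some regulatory approval records can be queried programmatically. The sketch below is a hypothetical Python example against the openFDA drugsfda endpoint; the endpoint, search syntax, and field names are assumptions drawn from openFDA's public documentation rather than anything described in this article, so treat it as a starting point only.

```python
# Purely illustrative sketch: looking up FDA approval records via the openFDA
# drugsfda endpoint. Endpoint and field names are assumptions based on public
# openFDA documentation, not on the article above.
import requests  # third-party: pip install requests

OPENFDA_URL = "https://api.fda.gov/drug/drugsfda.json"

def fda_applications(brand_name, limit=10):
    """Return application numbers and sponsors for products matching a brand name."""
    params = {
        "search": f'products.brand_name:"{brand_name}"',  # assumed field name
        "limit": limit,
    }
    response = requests.get(OPENFDA_URL, params=params, timeout=30)
    response.raise_for_status()
    return [
        {
            "application_number": result.get("application_number"),
            "sponsor": result.get("sponsor_name"),
        }
        for result in response.json().get("results", [])
    ]

if __name__ == "__main__":
    for app in fda_applications("Avandia"):  # example brand name
        print(app["application_number"], app["sponsor"])
```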

Limitations of the scientific reviews obtained from regulatory agency websites include the variable presentation format and the lack of text search facility for some scanned documents. In addition, the content is not standardised, information deemed to be commercially sensitive is redacted, and insufficient methodological detail is provided to assess the risk of bias for a trial. Furthermore, many trials are not included in regulatory databases, such as trials of devices and non-regulated interventions. Most trials conducted after regulatory approval would not be captured. For the European Medicines Agency, drugs that are approved by regulators in individual countries but not the central agency will not have public assessment reports available. Drugs@FDA includes information on withdrawn drugs but does not provide scientific reviews for unapproved drugs or drugs approved before 1998.

A second approach has the potential to yield more detailed information from regulatory agencies. Reviewers can make written requests to access the trial protocols and detailed clinical study reports submitted by sponsors. As of December 2010, the European Medicines Agency has committed to accommodating such requests for documents contained in regulatory submissions for drugs, subject to redaction of commercially sensitive information. 19 This important advance will be expanded in the future to include proactive public disclosure of documents on the European Medicines Agency website as part of routine practice. The US Food and Drug Administration has previously granted access to clinical trial documents in response to litigation relating to freedom of information requests 16 17 and is also exploring ways to increase transparency. 18

Limitations of this second approach include potentially lengthy delays in receiving a final decision from regulators, resource-intensive appeals or litigation for denied requests, redaction of potentially important information from documents, and lack of information on interventions other than regulated drugs and devices.

Contacting trialists and sponsors

Systematic reviewers have had variable success in contacting trialists, clinicians, and sponsors for information about unpublished trials. 4 22 23 24 25 Efforts to obtain full trial protocols from trialists have been largely disappointing. 26 27 On the other hand, surveys soliciting information on the existence and statistical significance of unreported outcomes for published trials have had higher response rates from trialists. 28 29 These surveys have also yielded information about the reasons for changing or omitting trial outcomes.

Logistical obstacles include the burden of identifying up to date contact information and sending inquiries and reminders to a potentially large number of individuals who might have knowledge about existing trials. It is also likely that trials for which additional information is provided by investigators or sponsors will differ systematically from trials without such information provided.

Systematic reviewers will need to weigh up the potential yield and costs of contacting investigators and sponsors, which will vary depending on the topic and scope of the review. At a minimum, for each trial identified in the systematic review, it would be reasonable for reviewers to contact investigators to request full protocols as well as information on unreported outcomes, unpublished trials, and other areas of potential bias.

Other sources of information

In some cases trial protocols and results can be obtained from litigation documents. Examples include researchers who had access to internal company documents while serving as expert witnesses in litigation against pharmaceutical companies. 30 31 32 In many jurisdictions, these documents are deemed confidential and their use is restricted to the purposes of the particular litigation—unless unsealed through a court order or agreement by the company. Systematic reviewers who are external to the litigation could submit a request to have the documents unsealed by the court to serve the public interest, although this approach has not been widely tested for pharmaceutical data. More extensive experience with public availability and archiving of litigation documents exists for other industries. 33

Another potential source of information consists of conference abstracts. 34 The Cochrane handbook lists several databases of abstracts that can be useful to search. 35 Given the limited amount of information on trial methods and results contained in abstracts, their usefulness lies mainly with identifying the existence of a trial and the types of outcomes measured.

Finally, an internet search of key words can be done to locate full trial protocols in a relatively short amount of time. The median search time in one systematic review was 12 minutes per trial, with protocols being found for five of 42 trials. 36 The retrieved documents are often those posted on the websites of specific trials, trial groups, and funders.

Conclusions

Given the dangers of selective data suppression and biased study design or conduct, it is critical that systematic reviewers search beyond the literature for additional information on both published and unpublished trials. The potential sources of information on study methods and results have expanded over recent years, particularly for pharmaceutical trials. These sources can provide complementary trial information that can be collated and compared to identify discrepancies and evaluate the risk of bias.

It is important to recognise the limitations and variable yield of existing information sources. Much work remains to ensure that comprehensive, high quality information is publicly available for all trials, including full protocols, clinical study reports, and raw datasets. 4 14 37 There is also a need to develop rigorous methods for reviewing the large amount of unpublished trial information that can potentially be retrieved. 4 15 Only with continued advances in access to clinical trial information can the systematic evaluation of health interventions become more accurate, efficient, and reliable for patient care.

Cite this as: BMJ 2012;344:d8013

  • Editorials doi:10.1136/bmj.d8158

Contributors: A-WC was responsible for interpretation of information, drafting the article, and final approval of the version to be published.

Competing interests: All authors have completed the Unified Competing Interest form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Provenance and peer review: Commissioned; externally peer reviewed.

References

1. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010;14:1-193.
2. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan A-W, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008;3:e3081.
3. Hopewell S, Dutton S, Yu LM, Chan A-W, Altman DG. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ 2010;340:c723.
4. Jefferson T, Doshi P, Thompson M, Heneghan C. Ensuring safe and effective evidence for drugs—who can do what it takes? BMJ 2011;342:c7258.
5. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928.
6. Chan A-W, Hróbjartsson A, Jørgensen KJ, Gøtzsche PC, Altman DG. Discrepancies in sample size calculations and data analyses reported in randomized trials: comparison of publications with protocols. BMJ 2008;337:a2299.
7. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database—update and key issues. N Engl J Med 2011;364:852-60.
8. Nissen SE, Wolski K. Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes. N Engl J Med 2007;356:2457-71.
9. GlaxoSmithKline. Public disclosure of clinical research. Global Public Policy Issues, October 2011. www.gsk.com/policies/GSK-on-disclosure-of-clinical-trial-information.pdf.
10. Reveiz L, Chan A-W, Krleža-Jerić K, Granados CE, Pinart M, Etxeandia I, et al. Reporting of methodologic information on trial registries for quality assessment: a study of trial records retrieved from the WHO search portal. PLoS ONE 2010;5:e12484.
11. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med 2009;6:e1000144.
12. Huić M, Marušić M, Marušić A. Completeness and changes in registered data and reporting bias of randomized controlled trials in ICMJE journals after trial registration policy. PLoS One 2011;6:e25258.
13. Viergever RF, Ghersi D. The quality of registration of clinical trials. PLoS One 2011;6:e14701.
14. Chan A-W. Access to clinical trial data. BMJ 2011;342:d80.
15. Gøtzsche PC, Jørgensen AW. Opening up data at the European Medicines Agency. BMJ 2011;342:d2686.
16. Kesselheim AS, Mello MM. Confidentiality laws and secrecy in medical research: improving public access to data on drug safety. Health Aff (Millwood) 2007;26:483-91.
17. Lurie P, Zieve A. Sometimes the silence can be like the thunder: access to pharmaceutical data at the FDA. Law Contemporary Problems 2008;69:85-97.
18. Asamoah AK, Sharfstein JM. Transparency at the Food and Drug Administration. N Engl J Med 2010;362:2341-3.
19. European Medicines Agency. European Medicines Agency policy on access to documents (related to medicinal products for human and veterinary use), POLICY/0043. EMA/110196/2006. 2010.
20. Rising K, Bacchetti P, Bero L. Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med 2008;5:e217.
21. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-60.
22. Reveiz L, Cardona AF, Ospina EG, de Agular S. An e-mail survey identified unpublished studies for systematic reviews. J Clin Epidemiol 2006;59:755-8.
23. McGrath J, Davies G, Soares K. Writing to authors of systematic reviews elicited further data in 17% of cases. BMJ 1998;316:631.
24. Clarke M, Greaves L. Identifying relevant studies for systematic reviews. BMJ 1995;310:741.
25. Hetherington J, Dickersin K, Chalmers I, Meinert CL. Retrospective and prospective identification of unpublished controlled trials: lessons from a survey of obstetricians and pediatricians. Pediatrics 1989;84:374-80.
26. Smyth RM, Kirkham JJ, Jacoby A, Altman DG, Gamble C, Williamson PR. Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. BMJ 2011;342:c7153.
27. Hahn S, Williamson PR, Hutton JL. Investigation of within-study selective reporting in clinical research: follow-up of applications submitted to a local research ethics committee. J Eval Clin Pract 2002;8:353-9.
28. Chan A-W, Altman DG. Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ 2005;330:753.
29. Chan A-W, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 2004;291:2457-65.
30. Vedula SS, Bero L, Scherer RW, Dickersin K. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med 2009;361:1963-71.
31. Ross JS, Madigan D, Hill KP, Egilman DS, Wang Y, Krumholz HM. Pooled analysis of rofecoxib placebo-controlled clinical trial data: lessons for postmarket pharmaceutical safety surveillance. Arch Intern Med 2009;169:1976-85.
32. Psaty BM, Kronmal RA. Reporting mortality findings in trials of rofecoxib for Alzheimer disease or cognitive impairment: a case study based on documents from rofecoxib litigation. JAMA 2008;299:1813-7.
33. Bero L. Implications of the tobacco industry documents for public health and policy. Annu Rev Public Health 2003;24:267-88.
34. Dundar Y, Dodd S, Dickson R, Walley T, Haycox A, Williamson PR. Comparison of conference abstracts and presentations with full-text articles in the health technology assessments of rapidly evolving technologies. Health Technol Assess 2006;10(5).
35. Higgins JPT, Green S, eds. 6.2.2.4 Conference abstracts or proceedings. In: Cochrane handbook for systematic reviews of interventions. Version 5.1.0. Cochrane Collaboration, 2011. www.cochrane-handbook.org.
36. Hartling L, Bond K, Vandermeer B, Seida J, Dryden DM, Rowe BH. Applying the risk of bias tool in a systematic review of combination long-acting beta-agonists and inhaled corticosteroids for persistent asthma. PLoS One 2011;6:e17242.
37. Krumholz HM, Ross JS. A model for dissemination and independent analysis of industry data. JAMA 2011;306:1593-4.


APA Referencing - Education & CCSC students: Unpublished or informally published work

Unpublished or informally published work

How to reference an unpublished or informally published work.

As with all referencing in academic writing, referencing is a matter of establishing the authority of the source or information you are relying upon as evidence to support the claims you make in your writing. This is the reason for peer review as it is a process that establishes the authority of a work through expert checking. Peer-reviewed published works are accepted as having greater authority than works that are not peer reviewed. Sometimes, however, the most useful research article might not be available as a peer-reviewed published article but it is available to us in an unpublished form. Use other peer-reviewed articles if possible but if there is a lack of published research reports and, for example, a pre-press version is available directly from the author, you may use it. Check whether the article has been published before submitting your final assignment or thesis and, if it has, reference the final version, taking into account any changes that the editors may have required in the peer-review process.

Unpublished and informally published works include:

  • work in progress
  • work submitted for publication
  • work prepared for publication but not submitted

Such works may be available from:

  • a university website
  • an electronic archive such as academia.edu or ResearchGate
  • the author's personal website

Reference list format

Author, A. A. (Year). Title of manuscript. Unpublished manuscript [or "Manuscript submitted for publication," or "Manuscript in preparation"].

If the unpublished manuscript is from a university, give this information at the end.

If you locate the work on an electronic archive, give this information at the end.

If a URL is available, give it at the end. 

If you use a pre-print version of an article that is later published, reference the published version.

  • Last Updated: Jul 29, 2024 10:52 AM
  • URL: https://morlingcollege.libguides.com/apareferencing


APA 7th Edition Style Guide: Unpublished Manuscripts/Informal Publications (i.e. course packets and dissertations)

Formatting your References

Once you type your references on the reference page, you will need to put in a hanging indent and double-space the entire reference list. In Microsoft Word, highlight the references from A to Z, then find the paragraph function in the Word ribbon. Select Hanging under Indentation and Double under spacing. See the Formatting your References tab for instructions on doing this on a Mac or in Google Docs.

Abbas, D. D. F. (2020). Manipulating of audio-visual aids in the educational processes in Al-Hilla University College. International Journal of Psychosocial Rehabilitation, 24 (3), 1248-1263. https://doi.org.db12.linccweb.org/10.37200/ijpr/v24i3/pr200875
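
If you prefer to apply the hanging indent and double spacing programmatically rather than through the Word ribbon, here is a rough Python sketch using the third-party python-docx package. The half-inch indent value is an assumption matching the common APA convention, and the example reference is simply the one shown above.

```python
# Rough sketch: applying a hanging indent and double spacing to a reference
# list with the third-party python-docx package (pip install python-docx).
from docx import Document
from docx.shared import Inches

references = [
    "Abbas, D. D. F. (2020). Manipulating of audio-visual aids in the educational "
    "processes in Al-Hilla University College. International Journal of Psychosocial "
    "Rehabilitation, 24(3), 1248-1263.",
]

doc = Document()
doc.add_heading("References", level=1)

for ref in references:
    paragraph = doc.add_paragraph(ref)
    fmt = paragraph.paragraph_format
    fmt.left_indent = Inches(0.5)          # indent the whole paragraph half an inch (assumed value)
    fmt.first_line_indent = Inches(-0.5)   # pull the first line back out: a hanging indent
    fmt.line_spacing = 2.0                 # double spacing

doc.save("references.docx")
```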


Cite previously published material

In-text citation: (Hirsh & Rangan, 2013).
Reference: (1), 21-23.

Cite unpublished or unattributed material (author listed)

In-text citation: (Bronson, 2013).
Reference: Bronson, E. (2013). Table of company earnings by growth rate. In F. Harber (Comp.), (pp. 15-16). Indian River State College.

Cite unpublished or unattributed material (no author)

In-text citation: ("Table of company," 2013).
Reference: Table of company earnings by growth rate. (2013). In F. Harber (Comp.), pp. 15-16. Fort Pierce, FL: Indian River State College.

Cite an unpublished dissertation or thesis

In-text citation: (Skidmore, 2017).
Reference: Skidmore, K. L. (2017). (Unpublished master's thesis). Nova Southeastern University, Fort Lauderdale, FL.

Cite a dissertation published in a subscription database

In-text citation: (Woods, 2014).
Reference: Woods, S. (2014). (Doctoral dissertation). Retrieved from ProQuest Criminal Justice Database. (Order No. 3665295)

  • Last Updated: Sep 3, 2024 11:32 AM
  • URL: https://irsc.libguides.com/APA


Open Access Theses and Dissertations


Advanced research and scholarship. Theses and dissertations, free to find, free to use.



About OATD.org

OATD.org aims to be the best possible resource for finding open access graduate theses and dissertations published around the world. Metadata (information about the theses) comes from over 1100 colleges, universities, and research institutions. OATD currently indexes 7,225,126 theses and dissertations.

About OATD (our FAQ).

Visual OATD.org

We’re happy to present several data visualizations to give an overall sense of the OATD.org collection by country of publication, language, and field of study.

You may also want to consult these sites to search for other theses:

  • Google Scholar
  • NDLTD, the Networked Digital Library of Theses and Dissertations. NDLTD provides information and a search engine for electronic theses and dissertations (ETDs), whether they are open access or not.
  • ProQuest Theses and Dissertations (PQDT), a database of dissertations and theses, whether they were published electronically or in print, and mostly available for purchase. Access to PQDT may be limited; consult your local library for access information.

  • Cochrane Database Syst Rev

Methods for obtaining unpublished data

Background

In order to minimise publication bias, authors of systematic reviews often spend considerable time trying to obtain unpublished data. These include data from studies conducted but not published (unpublished data), as either an abstract or full-text paper, as well as missing data (data available to original researchers but not reported) in published abstracts or full-text publications. The effectiveness of different methods used to obtain unpublished or missing data has not been systematically evaluated.

Objectives

To assess the effects of different methods for obtaining unpublished studies (data) and missing data from studies to be included in systematic reviews.

Search methods

We identified primary studies comparing different methods of obtaining unpublished studies (data) or missing data by searching the Cochrane Methodology Register (Issue 1, 2010), MEDLINE and EMBASE (1980 to 28 April 2010). We also checked references in relevant reports and contacted researchers who were known or who were thought likely to have carried out relevant studies. We used the Science Citation Index and PubMed 'related articles' feature to identify any additional studies identified by other sources (19 June 2009).

Selection criteria

Primary studies comparing different methods of obtaining unpublished studies (data) or missing data in the healthcare setting.

Data collection and analysis

The primary outcome measure was the proportion of unpublished studies (data) or missing data obtained, as defined and reported by the authors of the included studies. Two authors independently assessed the search results, extracted data and assessed risk of bias using a standardised data extraction form. We resolved any disagreements by discussion.

Main results

Six studies met the inclusion criteria; two were randomised studies and four were observational comparative studies evaluating different methods for obtaining missing data.

Methods to obtain missing data

Five studies, two randomised studies and three observational comparative studies, assessed methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study).

Two studies found that correspondence with study authors by e‐mail resulted in the greatest response rate with the fewest attempts and shortest time to respond. The difference between the effect of a single request for missing information (by e‐mail or surface mail) versus a multistage approach (pre‐notification, request for missing information and active follow‐up) was not significant for response rate and completeness of information retrieved (one study). Requests for clarification of methods (one study) resulted in a greater response than requests for missing data. A well‐known signatory had no significant effect on the likelihood of authors responding to a request for unpublished information (one study). One study assessed the number of attempts made to obtain missing data and found that the number of items requested did not influence the probability of response. In addition, multiple attempts using the same methods did not increase the likelihood of response.

Methods to obtain unpublished studies

One observational comparative study assessed methods to obtain unpublished studies (i.e. data for studies that have never been published). Identifying unpublished studies ahead of time and then asking the drug industry to provide further specific detail proved to be more fruitful than sending a non‐specific request.

Authors' conclusions

Those carrying out systematic reviews should continue to contact authors for missing data, recognising that this might not always be successful, particularly for older studies. Contacting authors by e‐mail results in the greatest response rate with the fewest number of attempts and the shortest time to respond.

Plain language summary

This methodology review was conducted to assess the effects of different methods for obtaining unpublished studies (data) and missing data from studies to be included in systematic reviews. Six studies met the inclusion criteria: two were randomised studies and four were observational comparative studies.

Five studies assessed methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study). Two studies found that correspondence with study authors by e‐mail resulted in the greatest response rate with the fewest attempts and shortest time to respond. The difference between the effect of a single request for missing information (by e‐mail or surface mail) versus a multistage approach (pre‐notification, request for missing information and active follow‐up) was not significant for response rate and completeness of information retrieved (one study). Requests for clarification of methods (one study) resulted in a greater response than requests for missing data. A well‐known signatory had no significant effect on the likelihood of authors responding to a request for unpublished information (one study). One study assessed the number of attempts made to obtain missing data and found that the number of items requested did not influence the probability of response. In addition, multiple attempts using the same methods did not increase the likelihood of response.

One study assessed methods to obtain unpublished studies (i.e. data for studies that have never been published). Identifying unpublished studies ahead of time and then asking the drug industry to provide further specific detail proved to be more fruitful than sending a non‐specific request.

Description of the problem or issue

Reporting bias arises when dissemination of research findings is influenced by the nature and direction of results. Publication bias (the selective publication of research studies as a result of the strength of study findings), time‐lag bias (the rapid or delayed publication of results depending on the results) and language bias (the publication in a particular language depending on the nature and direction of the results) are typical types of reporting bias ( Higgins 2009 ).

Publication bias, especially, is a major threat to the validity of systematic reviews ( Song 2000 ; Sterne 2008 ). Hopewell et al. examined the impact of grey literature (literature which has not formally been published) in meta‐analyses of randomised trials of healthcare interventions and found that published trials tend to be larger and show an overall greater treatment effect than trials from grey literature ( Hopewell 2007 ). Not making an attempt to include unpublished data in a systematic review can thus result in inflated, biased estimates of treatment effect ( Higgins 2009 ).

In order to minimise publication bias, authors of systematic reviews often spend considerable time trying to obtain unpublished data. These include data from studies conducted but not published (unpublished data) as either an abstract or full‐text paper, as well as missing data (data available to the original researchers but not reported) in published abstracts or full‐text publications. Types of data commonly missing from published papers include details of allocation concealment and blinding, information about loss to follow‐up and standard deviations. This is different from data that are 'missing' because the original researchers do not have them, but might be able to get them (e.g. a specific subgroup analysis not done by the original researchers, but which could be carried out retrospectively in response to a request from systematic review authors) or data that are missing because they were never collected by the original researchers and are not retrievable by other means (e.g. patient’s quality of life at specific time points).

Often, the search for and retrieval of unpublished and missing data delays the time to review completion.

Description of the methods being investigated

Different methods are used to search for and obtain unpublished data or missing data from studies to be included in systematic reviews.

Authors of systematic reviews informally contact colleagues to find out if they know about unpublished studies ( Greenhalgh 2005 ). In addition, formal requests for information on completed but unpublished studies, as well as ongoing studies, are sent to researchers (authors of identified included studies of the relevant review), experts in the field, research organisations and pharmaceutical companies ( Lefebvre 2008 ; Song 2000 ). Some organisations might set up websites for systematic review projects, listing the studies identified to date and inviting submission of information on studies not already listed ( Lefebvre 2008 ).

Prospective clinical trial registries, both national and international, are also searched to identify ongoing studies. In addition, registries of grey literature are searched to identify unpublished studies.
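Purely as an illustration of what a programmatic registry search might look like, the sketch below queries the public ClinicalTrials.gov v2 API for studies on a hypothetical topic. The endpoint, parameter names and response fields are assumptions based on the publicly documented v2 API and should be checked against the current documentation; the `requests` package is required.

```python
import requests

# Hypothetical registry query; endpoint, parameters and field names are assumptions
# based on the public ClinicalTrials.gov v2 API and should be verified before use.
resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.term": "paramedic airway management", "pageSize": 5},
    timeout=30,
)
resp.raise_for_status()
for study in resp.json().get("studies", []):
    ident = study.get("protocolSection", {}).get("identificationModule", {})
    print(ident.get("nctId"), "-", ident.get("briefTitle"))
```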

In order to obtain details about missing data (data available to the original researchers but not reported) authors of systematic reviews contact the authors of studies included in the review by telephone, e‐mail or letters by post.

How these methods might work

Approaching researchers for information about completed but never published studies has had varied results, ranging from willingness to share information to no response at all ( Greenhalgh 2005 ). The willingness of investigators of located unpublished studies to provide data may depend upon the findings of the study, with more favourable results being shared more willingly ( Smith 1998 ).

Why it is important to do this review

The effectiveness of the different methods used to obtain unpublished or missing data has not been systematically evaluated. This review systematically evaluates these methods and should thus help review authors to conduct their reviews more efficiently.

Criteria for considering studies for this review

Types of studies

Primary studies comparing different methods of obtaining unpublished studies (data) or missing data. We excluded studies without a comparison of methods.

Types of data

All relevant studies in the healthcare setting.

Types of methods

Any method designed to obtain unpublished studies (data) or missing data (i.e. data available to researchers but not reported).

Types of outcome measures

Primary outcomes

Methods to obtain missing data (data available to researchers but not reported in the published study).

  • Proportion of missing data obtained as defined and reported by authors.

Methods to obtain unpublished studies (data for studies that have never been published).

  • Proportion of unpublished studies (data) obtained as defined and reported by authors.

Secondary outcomes

Methods to obtain missing data (data available to the original researchers but not reported in the published study).

  • Completeness (the extent to which the data obtained answer the questions posed by those seeking the data) of missing data obtained.
  • Type (e.g. outcome data, baseline data) of missing data obtained.   
  • Time taken to obtain missing data (i.e. time from when efforts start until data are obtained).
  • Number of attempts (as defined by the authors) made to obtain missing data.
  • Resources required.

Methods to obtain unpublished studies (data for studies that have never been published).

  • Time taken to obtain unpublished studies (i.e. time from when efforts start until data are obtained).
  • Number of attempts (as defined by the authors) made to obtain unpublished studies (data).

Search methods for identification of studies

To identify studies we carried out both electronic and manual searches. All languages were included.

Electronic searches

We searched the Cochrane Methodology Register (CMR) (Issue 1, 2009) using the search terms in Appendix 1 . We searched MEDLINE and Ovid MEDLINE(R) In‐Process & Other Non‐Indexed Citations using OVID (1950 to 10 February 2009) ( Appendix 2 ) and adapted these terms for use in EMBASE (1980 to 2009 Week 06) ( Appendix 3 ). We conducted an updated search in EMBASE, MEDLINE and the Cochrane Methodology Register on 28 April 2010.

Searching other resources

We also checked references in relevant reports ( Horsley 2011 ) and contacted researchers who were known or who were thought likely to have carried out relevant studies. We used the Science Citation Index and PubMed 'related articles' feature to identify any additional studies identified by the sources above (19 June 2009).

Selection of studies

Two authors independently screened titles, abstracts and descriptor terms of the electronic search results for relevance based on the criteria for considering studies for this review. We obtained full‐text articles (where available) of all selected abstracts and used an eligibility form to determine final study selection. We resolved any disagreements through discussion.

Data extraction and management

Two authors independently extracted data using a standardised data extraction form. We resolved any disagreements by discussion.

Data extracted included the following (an illustrative sketch of how these items might be captured follows the list).

  • Administrative details for the study ‐ identification; author(s); published or unpublished; year of publication; year in which study was conducted; details of other relevant papers cited.
  • Details of study ‐ study design; inclusion and exclusion criteria; country and location of the study.
  • Details of intervention ‐ method(s) used to obtain unpublished or missing data.
  • Details of outcomes and results ‐ proportion, completeness and type of unpublished or missing data; time taken to obtain unpublished or missing data; number of attempts made and resources required.
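Purely as an illustration, the extraction items listed above could be captured in a structured, machine-readable form along the following lines; the field names and example values are invented for this sketch and do not reproduce the review authors' actual extraction form.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One hypothetical data extraction record; field names are illustrative only."""
    study_id: str                                  # e.g. "Gibson 2006"
    published: bool                                # published or unpublished report
    year_of_publication: Optional[int] = None
    year_conducted: Optional[int] = None
    study_design: str = ""                         # e.g. "randomised", "observational comparative"
    country: str = ""
    methods_compared: List[str] = field(default_factory=list)  # e.g. ["e-mail", "letter"]
    proportion_obtained: Optional[float] = None    # proportion of missing/unpublished data obtained
    completeness: Optional[str] = None
    type_of_data: Optional[str] = None             # e.g. "outcome data", "baseline data"
    time_to_obtain_days: Optional[float] = None
    number_of_attempts: Optional[int] = None
    resources_required: Optional[str] = None

# Example record with invented values, not data taken from the review:
record = ExtractionRecord(
    study_id="Gibson 2006",
    published=True,
    year_of_publication=2006,
    study_design="observational comparative",
    methods_compared=["e-mail", "letter", "both"],
)
print(record.study_id, record.methods_compared)
```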

Assessment of risk of bias in included studies

Two authors independently evaluated the risk of bias of included studies, which included both the inherent properties of the study and the adequacy of its reporting.

For randomised studies comparing different methods to obtain data we assessed the following criteria, based on The Cochrane Collaboration's 'Risk of bias' tool and classified as adequate, inadequate or unclear:

  • generation of the allocation sequence;
  • concealment of the allocation sequence;
  • blinding of the participants, personnel and outcome assessor.

For non‐randomised studies comparing different methods to obtain data we assessed the following criteria and reported whether they were adequate, inadequate or unclear:

  • how allocation occurred;
  • attempt to balance groups by design;
  • use of blinding.

Based on these criteria, we assessed studies as being at 'high', 'low' or 'moderate' risk of bias.
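The review does not state how the individual criteria were combined into an overall rating, so the following minimal sketch simply illustrates one possible (assumed) rule: 'low' risk if every domain is adequate, 'high' if any domain is inadequate, and 'moderate' otherwise.

```python
# Illustrative only: one possible way of combining domain-level judgements
# ("adequate", "inadequate", "unclear") into an overall risk-of-bias rating.
# The combination rule is an assumption, not the rule used by the review authors.
def overall_risk(domains: dict) -> str:
    judgements = set(domains.values())
    if judgements == {"adequate"}:
        return "low"
    if "inadequate" in judgements:
        return "high"
    return "moderate"

example = {
    "allocation sequence generation": "unclear",
    "allocation concealment": "adequate",
    "blinding": "unclear",
}
print(overall_risk(example))  # moderate
```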

Dealing with missing data

If any of the data were insufficient or missing, we sought data from the contact author of the empirical study using e‐mail. This was successful for one of the two studies ( Higgins 1999 ) for which we contacted the authors.

Data synthesis

Due to significant differences in study design, it was not possible to carry out a meta‐analysis of the included studies. Therefore the results of the individual studies are presented descriptively, reporting individual study effect measures and 95% confidence intervals where available.
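For readers unfamiliar with the effect measures reported below, the following minimal sketch shows how a risk ratio and an odds ratio, each with a Wald-type 95% confidence interval, can be derived from a 2x2 table of responders versus non-responders. The counts are invented for illustration and the Wald interval is just one common approach; neither is taken from the included studies.

```python
import math

def ratio_with_ci(a, b, c, d, measure="rr"):
    """Risk ratio or odds ratio with a Wald-type 95% CI from a 2x2 table.

    a, b: responders / non-responders in group 1
    c, d: responders / non-responders in group 2
    (The counts used below are purely illustrative.)
    """
    if measure == "rr":
        est = (a / (a + b)) / (c / (c + d))
        se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    else:  # odds ratio
        est = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(est) - 1.96 * se)
    upper = math.exp(math.log(est) + 1.96 * se)
    return est, lower, upper

# Hypothetical example: 40/100 authors respond to one contact method, 30/100 to another.
for m in ("rr", "or"):
    est, lo, hi = ratio_with_ci(40, 60, 30, 70, m)
    print(f"{m.upper()} {est:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```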

Description of studies

See Characteristics of included studies and Characteristics of excluded studies .

Results of the search

Of 4768 identified abstracts and titles, we selected 18 potentially eligible publications, referring to 15 studies, for detailed independent eligibility assessment ( Figure 1 ).

Figure 1. Study flow diagram.

Included studies

Six studies ( Brown 2003 ; Gibson 2006 ; Guevara 2005 ; Higgins 1999 ; Milton 2001 ; Shukla 2003 ) met the inclusion criteria. Of these, five were published as abstracts for Cochrane Colloquia and one as a full paper ( Gibson 2006 ). Only one observational comparative study evaluated the effects of different methods to obtain unpublished data ( Shukla 2003 ). The other five studies, two randomised studies ( Higgins 1999 ; Milton 2001 ) and three observational comparative studies ( Brown 2003 ; Gibson 2006 ; Guevara 2005 ), evaluated different methods for obtaining missing data. Table 1 provides a summary of the interventions studied and the outcomes measured.

Table 1. Summary of interventions compared and outcomes measured

  • E‐mail versus letter providing a semi‐personalised information retrieval sheet. Outcomes: additional information retrieved through contact with trial authors; costs incurred.
  • E‐mail versus letter versus both. Outcomes: proportion of responders over time; response rates in the United States compared to other countries.
  • E‐mail versus letter versus fax. Outcomes: response rate; time to response.
  • Single request for missing information (by e‐mail or surface mail) versus multistage approach involving pre‐notification, request for missing information and active follow‐up. Outcomes: whether contact is established; whether appropriate information is obtained.
  • Using a well‐known signatory versus an unknown researcher on the cover letter of a mailed questionnaire. Outcome: response of clinical trial investigators to requests for information.
  • Identifying unpublished studies ahead of time and then asking industry to provide further specific detail versus general request by letter for unpublished studies. Outcome: unpublished information obtained from the drug industry.

Excluded studies

Nine studies ( Bohlius 2003 ; Eysenbach 2001 ; Hadhazy 1999 ; Hetherington 1987 ; Kelly 2002 ; Kelly 2004 ; McGrath 1998 ; Reveiz 2004 ; Wille‐Jorgensen 2001 ) did not meet the inclusion criteria as there was no comparison of different methods of obtaining missing data.

Risk of bias in included studies

Brown 2003 , Gibson 2006 , Guevara 2005 and Shukla 2003 , the four observational, comparative studies, did not report on the methodology used and therefore assessments of risk of bias for these studies are incomplete.

We assessed risk of bias for the two randomised studies by looking at the methods used for allocation sequence generation, allocation concealment and blinding. Allocation concealment was adequate for Higgins 1999 and unclear for Milton 2001 . Allocation sequence generation and blinding were not reported.

Effect of methods

Five of the six studies assessed methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study).

Proportion of missing data obtained as defined and reported by authors

All five studies provided information on the proportion of missing data obtained.

Brown 2003 used a non‐randomised design to compare contacting 112 authors (of 139 studies) via 39 e‐mails and 73 letters. The study was designed as a comparative study but data per study arm were not reported. Twenty‐one replies (19%) were received. One study published in the period 1980‐1984 elicited no response, nine 1985‐1989 studies elicited two responses, 41 1990‐1994 studies elicited six responses, 38 1995‐1999 studies elicited eight responses and 21 2000‐2002 studies elicited four responses.

Gibson 2006 used a non‐randomised design to compare contacting authors by e‐mail, letter or both. Two hundred and forty‐one studies (40%) had missing or incomplete data. They were unable to locate 95 authors (39%). Of the remaining 146 authors, 46 authors (32%) responded to information requests. The response rate differed by mode of contact: letter (24%), e‐mail (47%) and both (73%). Response was significantly higher with e‐mail compared to using letters (hazard ratio 2.5; 95% confidence interval (CI) 1.3 to 4.0). Combining letter and e‐mail had a higher response rate; however, it was not significantly different from using e‐mail alone (reported P = 0.36). The combination of methods (letter plus e‐mail follow‐up) rather than multiple contacts using the same method was more effective for eliciting a response from the author. Response rates from US authors did not differ from those of other countries. The older the article, the less likely the response.

Guevara 2005 used a non‐randomised design to compare e‐mail versus letter versus fax. Fifteen authors (60%) responded to information requests. E‐mail resulted in fewer attempts and a greater response rate than post or fax. Authors of studies published after 1990 were as likely to respond (67% versus 50%, reported P = 0.45) as authors of studies published earlier. Similarly, corresponding authors were no more likely to respond (58% versus 9%, reported P = 0.44) than secondary authors, although few secondary authors were contacted.

Higgins 1999 used a randomised comparison of single request for missing information (by e‐mail or surface mail) (n = 116) versus a multistage approach involving pre‐notification, request for missing information and active follow‐up (n = 117) and found no significant difference between the two groups (risk ratio (RR) 1.04; 95% CI 0.74 to 1.45) in response rate.

Milton 2001 compared, using a randomised design, the response of clinical trial investigators to requests for information signed by either Richard Smith (RS), editor of the British Medical Journal (n = 96), or an unknown researcher (n = 48) and found no significant differences between signatory groups in response rates. By three weeks, 34% in the former and 27% in the unknown researcher's group had responded (odds ratio (OR) 1.35; 95% CI 0.59 to 3.11). No baseline data had been provided by three weeks. By the end of the study, at five weeks, 74% and 67% respectively had responded (OR 1.42; 95% CI 0.62 to 3.22) and 16 out of 53 studies in the RS group and five out of 27 authors in the unknown researcher's group had provided baseline data (OR 1.90; 95% CI 0.55 to 6.94).

Completeness of data

One of the five studies assessed the extent to which the data obtained answered the questions posed by those seeking the data.

Higgins 1999 compared, using a randomised design, the completeness of information retrieved between study arms (single request for missing information (by e‐mail or surface mail) (n = 116) versus multistage approach involving pre‐notification, request for missing information and active follow‐up (n = 117)) and found no significant difference between the two study methods. 

Type of missing data obtained

Two of the five studies assessed the type of missing data obtained.

Brown 2003 used a non‐randomised design to compare contacting 112 authors (of 139 studies) via 39 e‐mails and 73 letters and received 21 replies (19%), of which nine provided relevant outcome and quality data, one provided additional data on study quality only and one provided information regarding duplicate publications. Eleven studies provided no useful information. Data per study arm were not reported.

Guevara 2005 used a non‐randomised design to compare e‐mail versus letter versus fax and reported that requests for clarification of methods resulted in a greater response (50% versus 32%, P = 0.03) than requests for missing data. Once again, data per study arm were not reported.

Time taken to obtain missing data

Two of the five studies assessed the time taken to obtain missing data (i.e. time from when efforts start until data are obtained).

Gibson 2006 used a non‐randomised design to compare e‐mail versus letter versus both and reported that the time to respond differed significantly by contact method (P < 0.05): e‐mail (3 +/‐ 3 days; median one day), letter (27 +/‐ 30 days; median 10 days) and both (13 +/‐ 12 days; median nine days).

Guevara 2005 used a non‐randomised design to compare e‐mail versus letter versus fax and reported that e‐mail had a shorter response time than post or fax.

Number of attempts made to obtain missing data

One of the five studies assessed the number of attempts made to obtain missing data.

Gibson 2006 used a non‐randomised design to compare e‐mail versus letter versus both and reported that the number of items requested per author averaged two or more. The number of items requested did not influence the probability of response. In addition, multiple attempts using the same methods did not increase the likelihood of response.

Resources required

One of the five studies assessed the resources required to obtain missing data.

Brown 2003 used a non‐randomised design to compare contacting 112 authors (of 139 studies) via 39 e‐mails and 73 letters and reported total costs of 80 GBP for printing and postage. Cost was not reported per study arm.

One of the six included studies assessed methods to obtain unpublished studies (i.e. data for studies that have never been published).

Proportion of unpublished studies (data) obtained as defined and reported by authors

Shukla 2003 , using a non‐randomised design, assessed two different approaches to seek unpublished information from the drug industry. The outcome of a general request letter was compared with efforts to identify unpublished data and then contacting the industry to provide further specific detail. With the first approach, no unpublished information was obtained. With the second approach, relevant unpublished information was obtained for four of the five systematic reviews (in the form of manuscripts or oral/poster presentations).

No information was available for any of the secondary outcome measures relating to methods for obtaining unpublished studies.

Despite extensive searches we identified only six studies as eligible for inclusion in this review. Of these, five were published as abstracts and one as a full paper. Due to the lack of high‐quality studies, the results should be interpreted with caution. Five studies, two randomised studies and three observational comparative studies, evaluated different methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study). Two studies found that correspondence with study authors by e‐mail resulted in the greatest response rate with the fewest attempts and the shortest time to respond, when compared with correspondence by fax or letter. Combining letter and e‐mail had a higher response rate; however, it was not significantly different from using e‐mail alone. Another study found that authors of more recently published studies were more likely to respond. In addition, requests for clarification of the study methods appeared to result in a greater response rate than requests for missing data about the study results.

The choice between a single request for missing information (by e‐mail or surface mail) and a multistage approach (pre‐notification, request for missing information and active follow‐up) did not appear to affect the rate of response or the completeness of information retrieved; neither did the number of attempts made to obtain missing data or the number of items requested. Interestingly, the use of a well‐known signatory also had no significant effect on the likelihood of authors responding to a request for unpublished information. Only one study evaluated the effects of different methods to obtain unpublished data (i.e. data for studies that have never been published). This found that legwork ahead of time to clarify and request the specific unpublished study information required can prove more fruitful than sending a non‐specific request. The Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2009 ) suggests that review authors also consider contacting colleagues to find out if they are aware of any unpublished studies; we did not find any studies addressing the effectiveness of this approach.

When considering the findings of this review it is important to bear in mind the limitations in the completeness of the available data and how this weakens the strength of any recommendations we are able to draw. The general problem that a large proportion of conference abstracts do not get published in full has been shown by others ( Scherer 2007 ) and it was recently found that about two‐thirds of the research studies presented at Cochrane Colloquia do not get published in full ( Chapman 2010 ). We encountered this problem in this review, with five of the six studies being available only as abstracts at Colloquia. They lacked information about the study methodology and detailed results, and were never written up and published in full. Despite attempts to contact the authors of these studies we were only able to obtain additional information for one of the five studies. Ironically, our systematic review is subject to the same problems of obtaining missing data that it is trying to address. Assessment of risk of bias was also hampered by incomplete data; the four observational studies did not report on the study methods and only one of the two randomised studies reported on the method of allocation concealment. The Brown 2003 study was designed as a comparative study; however, only combined results were reported. The study is therefore reported in this review as though it were a non‐comparative report of the experience of contacting original authors.

Missing and incomplete data continue to be a major problem and potential source of bias for those carrying out systematic reviews. If data were missing from study reports at random, there would simply be less information available, but what remained would not necessarily be biased. The problem is that there is considerable evidence showing that studies are more likely to be published, and published more quickly, if they have significant findings ( Scherer 2007 ). Even when study results are published, there is evidence to show that authors are more likely to report significant study outcomes as opposed to non‐significant study outcomes ( Kirkham 2010 ). The findings from our review support the current recommendations in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2009 ) that those carrying out systematic reviews should continue to contact authors for missing data, recognising that this might not always be successful, particularly for older studies. Where it is not possible to contact authors to obtain missing data, review authors should also consider the potential benefits of searching prospective clinical trial registries and trial results registers for missing data. For example, in 2007 the US government passed legislation requiring the findings of all US government funded research to be included on www.clinicaltrials.gov within one year of study completion, thus making previously unpublished information available. The setting up of websites for systematic review projects, listing the studies identified to date and inviting submission of information on studies not already listed ( Lefebvre 2008 ), has also been proposed as a way of identifying unpublished studies.

Implications for methodological research

The strength of the evidence included in this review is limited by the completeness of the available data; five of the six studies included in this review lacked information about the study methodology and their results. Despite extensive searching only one study assessed methods for obtaining unpublished data. Further robust, comparative, well‐conducted and reported studies are needed on strategies to obtain missing and unpublished data.

Acknowledgements

We are very grateful to Julian Higgins who provided us with additional information regarding the First Contact study.

Appendix 1. Cochrane Methodology Register search strategy

#1 ("study identification" next general) or ("study identification" next "publication bias") or ("study identification" next "prospective registration") or ("study identification" next internet) or ("data collection") or ("missing data") or ("information retrieval" next general) or ("information retrieval" next "retrieval techniques") or ("information retrieval" next "comparisons of methods"):kw  in Methods Studies

#2 (request* or obtain* or identify* or locat* or find* or detect* or search or "ask for") NEAR/3 (grey or unpublished or "un published" or "not published"):ti or (request* or obtain* or identify* or locat* or find* or detect* or search or "ask for") NEAR/3 (grey or unpublished or "un published" or "not published"):ab

#3 (request* or obtain* or identify* or locat* or find* or detect* or search or "ask for") NEAR/3 (missing or missed or insufficient or incomplete or lack* or addition*):ti or (request* or obtain* or identify* or locat* or find* or detect* or search or "ask for") NEAR/3 (missing or missed or insufficient or incomplete or lack* or addition*):ab

#4 (missing or incomplete or unpublished or "un published" or "not published") NEAR/3 (data or information or study or studies or evidence or trial or trials):ti or (missing or incomplete or unpublished or "un published" or "not published") NEAR/3 (data or information or study or studies or evidence or trial or trials):ab

#5 (bad or ambiguous or insufficient or incomplete) NEAR/6 report*:ti or (bad or ambiguous or insufficient or incomplete) NEAR/3 report*:ab

#6 (#1 OR #2 OR #3 OR #4 OR #5)

Appendix 2. MEDLINE search strategy

1. ((request$ or obtain$ or identify$ or locat$ or find$ or detect$ or search or ask for) adj3 (grey or unpublished or "un published" or "not published") adj3 (data or information or evidence or study or studies or trial? or paper? or article? or report? or literature or work)).tw.

2. ((request$ or obtain$ or identify$ or locat$ or find$ or detect$ or search or ask for) adj3 (missing or insufficient or incomplete or lack$ or addition$) adj3 (data or information or evidence)).tw.

3. ((bad or ambiguous or insufficient or incomplete) adj6 reporting).tw.

4. 1 or 2 or 3

5. (2000$ or 2001$ or 2002$ or 2003$ or 2004$ or 2005$ or 2006$ or 2007$ or 2008$ or 2009$).ep.

Appendix 3. EMBASE search strategy

5. (2004$ or 2005$ or 2006$ or 2007$ or 2008$ or 2009$).em.
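Strategies like those in Appendices 1 to 3 can also be adapted to run programmatically. The sketch below sends a loosely simplified approximation of the Appendix 2 terms to PubMed through the NCBI E-utilities esearch endpoint; the query string is an assumption for illustration, not the validated Ovid strategy, and the `requests` package is required.

```python
import requests

# Loosely simplified approximation of the Appendix 2 terms (illustrative only,
# not the validated Ovid MEDLINE strategy).
query = (
    '(request* OR obtain* OR identif* OR locat*) AND '
    '("unpublished data" OR "unpublished studies" OR "missing data") AND '
    '(systematic review OR meta-analysis)'
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmax": 20, "retmode": "json"},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print("Records found:", result["count"])
print("First PMIDs:", result["idlist"])
```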

Characteristics of studies

Characteristics of included studies [ordered by study ID]

Brown 2003

Methods: This was a non‐randomised comparative study. Within the context of 4 systematic reviews on the prevention of NSAID‐induced gastro‐intestinal toxicity, trial authors were contacted by e‐mail (preferentially) or letter, providing a semi‐personalised information retrieval sheet.
Data: Context of 4 systematic reviews on the prevention of NSAID‐induced gastro‐intestinal toxicity; 112 authors (of 139 studies) were contacted.
Comparisons: E‐mail (n = 39) versus letter (n = 73) providing a semi‐personalised information retrieval sheet. However, the results were not presented separately for each approach.
Outcomes: Additional information retrieved through contact with trial authors; costs incurred.
Notes: This study was published as an abstract.
Allocation concealment? Unclear (non‐randomised comparison, therefore not applicable).
Allocation sequence generation? Unclear (non‐randomised comparison, therefore not applicable).
How allocation occurred? Unclear (not reported).
Attempts to balance groups? Unclear (not reported).
Use of blinding? Unclear (not reported).
Gibson 2006

Methods: This was a non‐randomised comparative study. The mode of contact and response levels of authors who had been asked to provide missing or incomplete data for a systematic review on diet and exercise interventions for weight loss were examined.
Data: A systematic review on diet and exercise interventions for weight loss.
Comparisons: E‐mail versus letter versus both (total n = 146; sample size per study arm not reported).
Outcomes: Proportion of responders over time among the different modes of contact; response rates from the United States compared to other countries.
Notes: This study was published as a full‐text paper.
Allocation concealment? Unclear (non‐randomised comparison, therefore not applicable).
Allocation sequence generation? Unclear (non‐randomised comparison, therefore not applicable).
How allocation occurred? Unclear (not reported).
Attempts to balance groups? Unclear (not reported).
Use of blinding? Unclear (not reported).
Guevara 2005

Methods: This was a non‐randomised comparative study. As part of a Cochrane Review comparing the effects of inhaled corticosteroids to cromolyn, authors of all included trials were contacted to clarify methods and/or to obtain missing outcome data. Authors listed as corresponding authors were contacted by e‐mail, letter or fax; remaining authors were contacted if there was no response from the corresponding author.
Data: Cochrane Review comparing the effects of inhaled corticosteroids to cromolyn; study authors of all 25 included trials were contacted to clarify methods and/or to obtain missing outcome data.
Comparisons: E‐mail versus letter versus fax (total n = 25; sample size per study arm not reported).
Outcomes: Response rate; time to response.
Notes: This study was published as an abstract.
Allocation concealment? Unclear (non‐randomised comparison, therefore not applicable).
Allocation sequence generation? Unclear (non‐randomised comparison, therefore not applicable).
How allocation occurred? Unclear.
Attempts to balance groups? Unclear (e‐mail was the preferred method of contact).
Use of blinding? No (no blinding).
Higgins 1999

Methods: This was a randomised comparison. Contact persons or authors (primary investigators) of published studies were eligible for the study if (i) the study had been identified as probably or definitely fulfilling the criteria for inclusion in a Cochrane Review, (ii) any information needed to complete the systematic review was missing from the published report, and (iii) a postal or e‐mail address was available for them. The reviewers should have completed assessment of studies for inclusion in the review and any data extraction.
Data: Randomised trial of Cochrane Review authors where the reviewer was uncertain how first contact should be made with the investigator of a primary study included in the Cochrane Review in order to obtain missing information.
Comparisons: Single request for missing information (by e‐mail or surface mail) (n = 116) versus multistage approach involving pre‐notification, request for missing information and active follow‐up (n = 117).
Outcomes: Primary outcome: amount of missing information retrieved from the investigator within 12 weeks of sending the original letter, rated on a 4‐point ordinal scale. Secondary outcomes: the time taken to receive some or all of the requested information; any response or acknowledgement from the investigator or someone else involved with the study or its data; cost, in terms of postage and telephone call time.
Notes: This study was called 'First Contact' and was published as an abstract. The study's website is available at http://www.mrc-bsu.cam.ac.uk/firstcontact/index.html. Additional information was obtained from the study authors.
Allocation concealment? Yes (central randomisation with minimisation to attempt to balance confounders).
Allocation sequence generation? Unclear (central randomisation).
How allocation occurred? Unclear (randomised comparison, therefore not applicable).
Attempts to balance groups? Unclear (randomised comparison, therefore not applicable).
Use of blinding? Unclear (not reported).
Milton 2001

Methods: This was a randomised comparison. Authors of eligible RCTs of interventions for essential hypertension published since 1996 and forming part of a methodological systematic review were randomised to receive a mailed questionnaire with a cover letter signed by Richard Smith (RS) or Julie Milton (JM), on stationery appropriate to each. After 3 weeks non‐responders were sent a questionnaire by recorded mail, with the same signatory. After a further 5 weeks, JM attempted to telephone non‐responders, telling authors randomised to RS as signatory that she was calling on his behalf.
Data: Authors of 144 eligible RCTs of interventions for essential hypertension published since 1996 and forming part of a methodological systematic review.
Comparisons: Using a well‐known signatory (n = 96) versus an unknown researcher (n = 48) on the cover letter of a mailed questionnaire.
Outcomes: Response of clinical trial investigators to requests for information.
Notes: This study was published as an abstract.
Allocation concealment? Unclear (allocation was performed by an independent statistician).
Allocation sequence generation? Unclear (not reported).
How allocation occurred? Unclear (randomised comparison, therefore not applicable).
Attempts to balance groups? Unclear (randomised comparison, therefore not applicable).
Use of blinding? Unclear (not reported).
Shukla 2003

Methods: This was a non‐randomised comparative study. Over 4 years, for each of 5 systematic reviews of drugs at CCOHTA, 2 different approaches were used to seek unpublished information from the drug industry. With the first approach, a general request letter was sent. With the second approach, unpublished studies were identified ahead of time via handsearching of conference abstracts, review articles and bibliographies of included studies plus electronic searches of BIOSIS Previews. A Google search was also run. Unpublished studies were identified and industry was asked to provide further specific detail.
Data: Five systematic reviews of drugs at CCOHTA. Number of trials not reported.
Comparisons: Identifying unpublished studies ahead of time and then asking industry to provide further specific detail versus a general request by letter for unpublished studies.
Outcomes: Unpublished information obtained from the drug industry.
Notes: This study was published as an abstract.
Allocation concealment? Unclear (non‐randomised comparison, therefore not applicable).
Allocation sequence generation? Unclear (non‐randomised comparison, therefore not applicable).
How allocation occurred? Unclear (not reported).
Attempts to balance groups? Unclear (not reported).
Use of blinding? Unclear (not reported).

CCOHTA: Canadian Co‐ordinating Office for Health Technology Assessment; NSAID: non‐steroidal anti‐inflammatory drug; RCT: randomised controlled trial

Characteristics of excluded studies [ordered by study ID]

  • Bohlius 2003: Study that looked at contacting authors to obtain missing data. It was excluded because there was no comparison of different methods of obtaining missing data.
  • Eysenbach 2001: Study that looked at use of the internet to identify unpublished studies. It was excluded because there was no comparison of different methods of obtaining unpublished studies.
  • Hadhazy 1999: Study that looked at contacting authors to obtain missing data. It was excluded because there was no comparison of different methods of obtaining missing data.
  • Hetherington 1987: Study that looked at surveying content experts to identify unpublished studies. It was excluded because there was no comparison of different methods of obtaining unpublished studies.
  • Kelly 2002: This study contacted authors of studies for individual patient data using letters. It was excluded because there was no comparison of different methods of obtaining missing data.
  • Kelly 2004: Study that looked at contacting authors to obtain missing data. It was excluded because there was no comparison of different methods of obtaining missing data.
  • McGrath 1998: Study that looked at contacting authors to obtain missing data. It was excluded because there was no comparison of different methods of obtaining missing data.
  • Reveiz 2004: Study that looked at surveying content experts to identify unpublished studies. It was excluded because there was no comparison of different methods of obtaining unpublished studies.
  • Wille‐Jorgensen 2001: Primary authors and/or sponsoring pharmaceutical companies of studies in general surgery which might contain colorectal patients were contacted per mail, e‐mail and/or personal contacts. The responses from the 3 methods were not compared.

Contributions of authors

Taryn Young (TY) developed and Sally Hopewell (SH) provided comments on the protocol. Both authors reviewed the search results, selected potential studies for inclusion, worked independently to do a formal eligibility assessment and then extracted data from included studies. TY drafted the review with input from SH.

Sources of support

Internal sources

  • South African Cochrane Centre, South Africa.
  • UK Cochrane Centre, NHS Research & Development Programme, UK.

External sources

  • No sources of support supplied

Declarations of interest

None known.

References to studies included in this review

Brown 2003 {published data only}

  • Brown T, Hooper L. Effectiveness of brief contact with authors . XI Cochrane Colloquium: Evidence, Health Care and Culture; 2003 Oct 26‐31; Barcelona, Spain . 2003.

Gibson 2006 {published data only}

  • Gibson CA, Bailey BW, Carper MJ, Lecheminant JD, Kirk EP, Huang G, et al. Author contacts for retrieval of data for a meta‐analysis on exercise and diet restriction . International Journal of Technology Assessment in Health Care 2006; 22 ( 2 ):267‐70. [ PubMed ] [ Google Scholar ]

Guevara 2005 {published data only}

  • Guevara J, Keren R, Nihtianova S, Zorc J. How do authors respond to written requests for additional information? . XIII Cochrane Colloquium; 2005 Oct 22‐26; Melbourne, Australia . 2005.

Higgins 1999 {published data only}

  • Higgins J, Soomro M, Roberts I, Clarke M. Collecting unpublished data for systematic reviews: a proposal for a randomised trial . 7th Annual Cochrane Colloquium Abstracts, October 1999 in Rome . 1999.

Milton 2001 {published data only}

  • Milton J, Logan S, Gilbert R. Well‐known signatory does not affect response to a request for information from authors of clinical trials: a randomised controlled trial . 9th Annual Cochrane Colloquium Abstracts, October 2001 in Lyon . 2001.

Shukla 2003 {published data only}

  • Shukla V. The challenge of obtaining unpublished information from the drug industry . XI Cochrane Colloquium: Evidence, Health Care and Culture; 2003 Oct 26‐31; Barcelona, Spain . 2003.

References to studies excluded from this review

Bohlius 2003 {published data only}

  • Bohlius J, Langensiepen S, Engert A. Data hunting: a case report . XI Cochrane Colloquium: Evidence, Health Care and Culture; 2003 Oct 26‐31; Barcelona, Spain . 2003.

Eysenbach 2001 {published data only}

  • Eysenbach G, Tuische J, Diepgen TL. Evaluation of the usefulness of Internet searches to identify unpublished clinical trials for systematic reviews . Medical Informatics and the Internet in Medicine 2001; 26 ( 3 ):203‐18. [ PubMed ] [ Google Scholar ]
  • Eysenbach G, Tuische J, Diepgen TL. Evaluation of the usefulness of internet searches to identify unpublished clinical trials for systematic reviews . Chinese Journal of Evidence‐Based Medicine 2002; 2 ( 3 ):196‐200. [ PubMed ] [ Google Scholar ]

Hadhazy 1999 {published data only}

  • Hadhazy V, Ezzo J, Berman B. How valuable is effort to contact authors to obtain missing data in systematic reviews . 7th Annual Cochrane Colloquium Abstracts, October 1999 in Rome . 1999.

Hetherington 1987 {published data only}

  • Hetherington J. An international survey to identify unpublished and ongoing perinatal trials [abstract] . Controlled Clinical Trials 1987; 8 :287. [ Google Scholar ]
  • Hetherington J, Dickersin K, Chalmers I, Meinert CL. Retrospective and prospective identification of unpublished controlled trials: lessons from a survey of obstetricians and pediatricians . Pediatrics 1989; 84 ( 2 ):374‐80. [ PubMed ] [ Google Scholar ]

Kelly 2002 {published data only}

  • Kelley GA, Kelley KS, Tran ZV. Retrieval of individual patient data for an exercise‐related meta‐analysis . Medicine & Science in Sports & Exercise 2002; 34 ( 5 (Suppl 1) ):S225. [ Google Scholar ]

Kelly 2004 {published data only}

  • Kelley GA, Kelley KS, Tran ZV. Retrieval of missing data for meta‐analysis: a practical example . International Journal of Technology Assessment in Health Care 2004; 20 ( 3 ):296–9. [ PMC free article ] [ PubMed ] [ Google Scholar ]

McGrath 1998 {published data only}

  • McGrath J, Davies G, Soares K. Writing to authors of systematic reviews elicited further data in 17% of cases . BMJ 1998; 316 :631. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Reveiz 2004 {published data only}

  • Reveiz L, Andres Felipe C, Egdar Guillermo O. Using e‐mail for identifying unpublished and ongoing clinical trials and those published in non‐indexed journals . 12th Cochrane Colloquium: Bridging the Gaps; 2004 Oct 2‐6; Ottawa, Ontario, Canada . 2004.
  • Reveiz L, Cardona AF, Ospina EG, Agular S. An e‐mail survey identified unpublished studies for systematic reviews . Journal of Clinical Epidemiology 2006; 59 ( 7 ):755‐8. [ PubMed ] [ Google Scholar ]

Wille‐Jorgensen 2001 {published data only}

  • Wille‐Jorgensen. Problems with retrieving original data: is it a selection bias? . 9th Annual Cochrane Colloquium, Lyon . October 2001.

Additional references

Chapman 2010

  • Chapman S, Eisinga A, Clarke MJ, Hopewell S. Passport to publication? Do methodologists publish after Cochrane Colloquia? . Joint Cochrane and Campbell Colloquium . 2010 Oct 18‐22; Keystone, Colorado, USA. Cochrane Database of Systematic Reviews, Supplement 2010; Suppl: 14.

Greenhalgh 2005

  • Greenhalgh T, Peacock R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources . BMJ 2005; 331 ( 7524 ):1064‐5. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Higgins 2009

  • Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.2 [updated September 2009]. The Cochrane Collaboration, 2009 . Available from www.cochrane‐handbook.org .

Hopewell 2007

  • Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta‐analyses of randomized trials of health care interventions . Cochrane Database of Systematic Reviews 2007, Issue 2 . [DOI: 10.1002/14651858.MR000010.pub3] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Horsley 2011

  • Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews . Cochrane Database of Systematic Reviews 2011, Issue 8 . [DOI: 10.1002/14651858.MR000026.pub2] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Kirkham 2010

  • Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smith R, et al. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews . BMJ 2010; 340 :c365. [ PubMed ] [ Google Scholar ]

Lefebvre 2008

  • Lefebvre C, Manheimer E, Glanville J on behalf of the Cochrane Information Retrieval Methods Group. Chapter 6: Searching for studies. In: Higgins JPT, Green S (eds). Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008 . Available from www.cochrane‐handbook.org .

Scherer 2007

  • Scherer RW, Langenberg P, Elm E. Full publication of results initially presented in abstracts . Cochrane Database of Systematic Reviews 2007, Issue 2 . [DOI: 10.1002/14651858.MR000005.pub3] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Smith 1998

  • Smith GD, Egger M. Meta‐analysis. Unresolved issues and future developments . BMJ 1998; 316 ( 7126 ):221‐5. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Song 2000

  • Song F, Eastwood AJ, Gilbody S, Duley L, Sutton AJ. Publication and related biases . Health Technology Assessment 2000; 4 ( 10 ):1‐115. [ PubMed ] [ Google Scholar ]

Sterne 2008

  • Sterne JAC, Egger M, Moher D (editors). Chapter 10: Addressing reporting biases. In: Higgins JPT, Green S (editors). Cochrane Handbook of Systematic Reviews of Interventions. Version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008 . Available from www.cochrane‐handbook.org .

Holocaust-Era Assets


Unpublished Research Papers

Unpublished Research papers, relating to Holocaust-Era Assets, made available online

  • Berenbaum, Michael. Testimony before the Nazi War Criminals Interagency Working Group , June 24, 1999.
  • Bradsher, Greg. Archivists, Archival Records, and Looted Cultural Property Research . Paper presented at the Vilnius International Forum on Holocaust-Era Looted Cultural Assets, Lithuania, October 3, 2000.
  • Bradsher, Greg. Turning history into justice: the search for records relating to Holocaust-Era Assets at the National Archives . Paper given at the Society of American Archivists, Pittsburgh, PA, August 27, 1999.
  • Kleiman, Miriam. My search for "GOLD" at the National Archives . Paper given at the Society of American Archivists, Pittsburgh, PA, August 27, 1999.
  • Marchesano, Louis. Classified Records, Nazi Collecting, and Looted Art: An Art Historian's Perspective . Paper delivered to the Nazi War Criminal Records Interagency Working Group at the Simon Wiesenthal Center, Los Angeles, June 24, 1999.
  • Rickman, Gregg. The Truth Shall Set You Free: The Archives and the Swiss Bank . Paper delivered at the Society of American Archivists, Pittsburgh, PA, August 27, 1999. Rickman is scheduled to discuss his new book, Swiss Banks and Jewish Souls, at the National Archives Author Lecture and Booksigning event on September 9, 1999.
  • Sullivan, Steve. Marta's List: The Pursuit of Holocaust Survivors' Lost Insurance Claims .
  • Wolfe, Robert. A Brief Chronology of the National Archives Captured Records Staff

Symposium participants are invited to send their papers, electronically or in hardcopy, to [email protected] or to Lida Churchville, National Archives Library, Rm. 2380, 8061 Adelphi Rd, College Park, MD 20740.


APA Style Examples


Unpublished examples

  • Class documents
  • Interview/letter/email
  • How to cite an online course or MOOC
  • How to cite PowerPoint slides or lecture notes

PERSONAL COMMUNICATION

IN TEXT 

(Communicator, personal communication, Date of communication).

L. Mardis (personal communication, July 29, 2019) reported that the library's guides underwent usability testing.

(L. A. Mardis, personal communication, January 22, 2020).

(L. A. Mardis, class handouts, January 21, 2020).

REFERENCE LIST

[ APA Citation Example for an Interview ]

Mardis, L. A. (2018, October 19). Social media success in academic libraries [Interview]. Maryville, MO: Northwest Missouri State University.

(For more examples, see p. 340 of the 7th edition)

[ APA Citing Example - Test ]

Goldberg, I. K. (2003). Screening for Bipolar Spectrum Disorders [Measurement instrument]. http://psychiatryassociatespc.com/doc/Goldberg's_bipolar_screening_scale.pdf

IN TEXT

In this study, Goldberg's (2003) Screening for Bipolar Spectrum Disorders was used to identify whether individuals were most likely suffering from major (unipolar) depression.  

  • << Previous: AI
  • Next: Stats/Figures >>
  • Last Updated: Sep 6, 2024 1:40 PM
  • URL: https://libguides.nwmissouri.edu/apa

Cochrane Methods Information Retrieval

Searching for unpublished studies

A consortium consisting of York Health Economics Consortium and the Cochrane Information Retrieval Methods Group has looked into the issue of searching for unpublished studies and obtaining access to unpublished data and has produced the following report and bibliography:

Arber M, Cikalo M, Glanville J, Lefebvre C, Varley D, Wood H. Annotated bibliography of published studies addressing searching for unpublished studies and obtaining access to unpublished data. York: York Health Economics Consortium; 2013.

This work was a sub-project of a larger project entitled “Searching for unpublished trials using trials registers and trials web sites and obtaining unpublished trial data and corresponding trial protocols from regulatory agencies”.

Other outputs of this project include:

Schroll, JB, Bero, L, Gotzsche, P. Searching for unpublished data for Cochrane reviews: Cross sectional study.  BMJ 2013;346:f2231

Wolfe, N, Gotzsche, PC and Bero, L.  Strategies for obtaining unpublished drug trial data:  A qualitative interview study.  Systematic Reviews. 2013; 2:31. http://www.systematicreviewsjournal.com/content/2/1/31

The project was a collaboration between the San Francisco Branch of the United States Cochrane Center, Nordic Cochrane Centre, Cochrane Acute Respiratory Infections Group, York Health Economics Consortium and the Cochrane Information Retrieval Methods Group. This sub-project was undertaken by staff of York Health Economics Consortium and Carol Lefebvre of Lefebvre Associates Ltd, for which some funding was provided by the Cochrane Collaboration under the Methods Innovation Funding initiative.

We thank the authors for allowing us to link to the full text of the report from this site.

10.3.2  Including unpublished studies in systematic reviews

Publication bias clearly is a major threat to the validity of any type of review, but particularly of unsystematic, narrative reviews. Obtaining and including data from unpublished trials appears to be one obvious way of avoiding this problem. Hopewell and colleagues conducted a review of studies comparing the effect of the inclusion or exclusion of ‘grey’ literature (defined here as reports that are produced by all levels of government, academics, business and industry in print and electronic formats but that are not controlled by commercial publishers) in meta-analyses of randomized trials (Hopewell 2007b). They included five studies (Fergusson 2000, McAuley 2000, Burdett 2003, Hopewell 2004), all of which showed that published trials had an overall greater intervention effect than grey trials. A meta-analysis of three of these studies suggested that, on average, published trials showed a 9% larger intervention effect than grey trials (Hopewell 2007b).

The inclusion of data from unpublished studies can itself introduce bias. The studies that can be located may be an unrepresentative sample of all unpublished studies. Unpublished studies may be of lower methodological quality than published studies: a study of 60 meta-analyses that included published and unpublished trials found that unpublished trials were less likely to conceal intervention allocation adequately and to blind outcome assessments (Egger 2003). In contrast, Hopewell and colleagues found no difference in the quality of reporting of this information (Hopewell 2004).

A further problem relates to the willingness of investigators of located unpublished studies to provide data. This may depend upon the findings of the study, more favourable results being provided more readily. This could again bias the findings of a systematic review. Interestingly, when Hetherington et al., in a massive effort to obtain information about unpublished trials in perinatal medicine, approached 42,000 obstetricians and paediatricians in 18 countries they identified only 18 unpublished trials that had been completed for more than two years (Hetherington 1989).

A questionnaire assessing the attitudes toward inclusion of unpublished data was sent to the authors of 150 meta-analyses and to the editors of the journals that published them (Cook 1993). Researchers and editors differed in their views about including unpublished data in meta-analyses. Support for the use of unpublished material was evident among a clear majority (78%) of meta-analysts while journal editors were less convinced (47%) (Cook 1993).  This study was recently repeated, with a focus on the inclusion of grey literature in systematic reviews, and it was found that acceptance of inclusion of grey literature has increased and, although differences between groups remain (systematic review authors: 86%, editors: 69%), they may have decreased compared with the data presented by Cook et al. (Tetzlaff 2006).

Reasons for reluctance to include grey literature included the absence of peer-review of unpublished literature. It should be kept in mind, however, that the refereeing process has not always been a successful way of ensuring that published results are valid (Godlee 1999). The team involved in preparing a Cochrane review should have at least a similar level of expertise with which to appraise unpublished studies as a peer reviewer for a journal. On the other hand, meta-analyses of unpublished data from interested sources are clearly a cause for concern.


MLA Citation Guide (9th Edition): Unpublished Manuscript or Paper


Unpublished Manuscripts or Papers


Unpublished Manuscript

Author. “Title of Manuscript/Document.” Date of composition (at least the year), followed by the name and location of the library, research institution, or personal collection housing the material.

(Last Name Page Number)

 

Student Paper (including if you are citing your own previous work)

Author. “Title of the Paper.” date of composition. Name of the Course, Name of the Institution, type of work (optional).

Note: this format does not extend to MA or PhD theses, which are public-facing works and should be cited as theses or dissertations instead.

(Last Name Page Number)

 



ACAP LEARNING RESOURCES

Reference in APA 7

Reference Elements: Unpublished & Informally Published Material

Author, A. A., & Author, B. B. (Year). Title of work in italics [Description of unpublished manuscript]. Department Name, University Name. https://xxxxxx

Author, A. A., & Author, B. B. (Year). Title of work in italics (Publication No. ###). Name of Database or Archive. https://doi.org/xxxxxx

Use specific manuscript descriptions, e.g. [Unpublished manuscript]. [Manuscript in preparation]. [Manuscript submitted for publication]. Always use a DOI if the resource has one. Include a URL if there isn't a DOI available and if it resolves without authentication. 
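As an illustration only (not part of APA style or of this guide), the short Python sketch below shows how the reference pattern above could be assembled programmatically; the function and parameter names are hypothetical, and italics cannot be represented in a plain string. The usage example reproduces the Leemans and Artem reference shown under the examples that follow.

# Illustrative sketch: assemble an APA 7-style reference string for an
# unpublished work from its parts. All names here are hypothetical.
def format_unpublished_reference(authors, year, title,
                                 description="Unpublished manuscript",
                                 department="", university="", url=""):
    """Return an APA 7-style reference string for an unpublished work."""
    source = ", ".join(part for part in (department, university) if part)
    pieces = [f"{authors} ({year}).", f"{title} [{description}]."]
    if source:
        pieces.append(f"{source}.")
    if url:
        pieces.append(url)
    return " ".join(pieces)

print(format_unpublished_reference(
    "Leemans, S. J. J., & Artem, P.", 2019,
    "Proofs with stochastic-aware conformance checking: An entropy-based approach",
    department="Faculty of Science and Technology",
    university="Queensland University of Technology",
    url="https://eprints.qut.edu.au/129860/"))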
Reference list examples

Leemans, S. J. J., & Artem, P. (2019). Proofs with stochastic-aware conformance checking: An entropy-based approach [Unpublished manuscript]. Faculty of Science and Technology, Queensland University of Technology. https://eprints.qut.edu.au/129860/

Winegard, B. M., Winegard, B. M., Geary, D. C., & Clark, C. J. (2018). The status competition model of cultural production . PsyArXiv.  https://doi.org/10.31234/osf.io/apw5e/

Parenthetical Style

See theorem one: "for any log L and model M (given as SDFAs), it holds that 0 ≤ recall(L, M) ≤ 1 and 0 ≤ precision(L, M) ≤ 1" (Leemans & Artem, 2019, p. 2).

In this example, the architecture of Frank Lloyd Wright is used for its functional and aesthetic qualities (Winegard et al., 2018).

Narrative Style

Leemans and Artem (2019) propose that "for any log L and model M (given as SDFAs), it holds that 0 ≤ recall(L, M) ≤ 1 and 0 ≤ precision(L, M) ≤ 1" (p. 2).

Winegard et al. (2018) use the architecture of a Frank Lloyd Wright house as an example for its functional and aesthetic qualities.


American Psychological Association

Unpublished Dissertation or Thesis References

This page contains a reference example for an unpublished dissertation or thesis.

Harris, L. (2014). Instructional leadership perceptions and practices of elementary school leaders [Unpublished doctoral dissertation]. University of Virginia.

  • Parenthetical citation : (Harris, 2014)
  • Narrative citation : Harris (2014)
  • When a dissertation or thesis is unpublished, include the description “[Unpublished doctoral dissertation]” or “[Unpublished master’s thesis]” in square brackets after the dissertation or thesis title.
  • In the source element of the reference, provide the name of the institution that awarded the degree.
  • The same format can be adapted for other unpublished theses, including undergraduate theses, by changing the wording of the bracketed description as appropriate.
  • If you find the dissertation or thesis in a database or in a repository or archive, follow the published dissertation or thesis reference examples.

Unpublished dissertation or thesis references are covered in the seventh edition APA Style manuals in Section 10.6 of the Publication Manual and Section 10.5 of the Concise Guide.


Citation Guide: How to cite UNPUBLISHED SOURCES


Theses and Dissertations

Note: Note number. Author First Last Name, “Title” (Type of dissertation, Location of Publisher, Year of Pub.), pages cited, URL or database (if online).

Sample Note:

      43. Afrah Daaimah Richmond, “Unmasking the Boston Brahmin: Race and Liberalism in the Long Struggle for Reform at Harvard and Radcliff, 1945-1990” (PhD diss., New York University, 2011), 211-12, ProQuest Dissertations & Theses.

Bibliography:

Author Last, First Name. “Title.” Type of Dissertation, Location of Publisher, Year of Pub. URL or database (if online).

Sample Citation:

Culcasi, Karen Leigh. “Cartographic Representations of Kurdistan in the Print Media.” Master’s Thesis, Syracuse University, 2003.

Lectures or Papers presented at a meeting

Note number. Author First Last Name, “Title” (Sponsor, Location, Year). URL or database (if online).

43. Irineu de Carvalho Filho and Renato P. Colistete, “Education Performance: Was it All Determined 100 Years Ago? Evidence from Sao Paulo, Brazil” (Paper presented at the 70th annual meeting of the Economic History Association, Evanston, IL, September 24-26, 2010). http://mpra.ub.uni-muenchen.de/24494/1/MPRA_paper_24494.pdf.

Bibliography:

Author Last, First Name. “Title of Speech or lecture.” Sponsor, Location, Year. URL or database (if online).

Crane, Gregory R. “Contextualizing Early Modern Religion in a Digital World.” Lecture, Newberry Library, Chicago, September 16, 2011.

Carvalho Filho, Irineu de, and Renato P. Colistete. “Education Performance: Was it All Determined 100 Years Ago? Evidence from Sao Paulo, Brazil.” Paper presented at the 70th annual meeting of the Economic History Association, Evanston, IL, September 24-26, 2010. http://mpra.ub.uni-muenchen.de/24494/1/MPRA_paper_24494.pdf.


A protocol for a systematic review on the impact of unpublished studies and studies published in the gray literature in meta-analyses

  • Christine Schmucker 1,7,
  • Annette Bluemle 1,
  • Matthias Briel 2,3,
  • Susan Portalupi 1,
  • Britta Lang 1,
  • Edith Motschall 4,
  • Guido Schwarzer 4,
  • Dirk Bassler 5,
  • Katharina F Mueller 5,
  • Erik von Elm 1,6 &
  • Joerg J Meerpohl 1

Systematic Reviews, volume 2, Article number: 24 (2013)


Meta-analyses are particularly vulnerable to the effects of publication bias. Despite methodologists’ best efforts to locate all evidence for a given topic, the most comprehensive searches are likely to miss unpublished studies and studies that are published in the gray literature only. If the results of the missing studies differ systematically from the published ones, a meta-analysis will be biased with an inaccurate assessment of the intervention’s effects.

As part of the OPEN project (http://www.open-project.eu) we will conduct a systematic review with the following objectives:

▪ To assess the impact of studies that are not published or published in the gray literature on pooled effect estimates in meta-analyses (quantitative measure).

▪ To assess whether the inclusion of unpublished studies or studies published in the gray literature leads to different conclusions in meta-analyses (qualitative measure).

Methods/Design

Inclusion criteria: Methodological research projects of a cohort of meta-analyses which compare the effect of the inclusion or exclusion of unpublished studies or studies published in the gray literature.

Literature search: To identify relevant research projects we will conduct electronic searches in Medline, Embase and The Cochrane Library; check reference lists; and contact experts.

Outcomes: 1) The extent to which the effect estimate in a meta-analysis changes with the inclusion or exclusion of studies that were not published or published in the gray literature; and 2) the extent to which the inclusion of unpublished studies impacts the meta-analyses’ conclusions.

Data collection: Information will be collected on the area of health care; the number of meta-analyses included in the methodological research project; the number of studies included in the meta-analyses; the number of study participants; the number and type of unpublished studies; studies published in the gray literature and published studies; the sources used to retrieve studies that are unpublished, published in the gray literature, or commercially published; and the validity of the methodological research project.

Data synthesis: Data synthesis will involve descriptive and statistical summaries of the findings of the included methodological research projects.

Results are expected to be publicly available in the middle of 2013.


A meta-analysis conducted as part of a systematic review aims to provide a thorough, comprehensive and unbiased account of the literature [1, 2]. However, potentially important studies could be missing from a meta-analysis because of selective publication and inadequate dissemination of results. Despite methodologists’ best efforts to locate all eligible evidence for a given topic, the most comprehensive searches are likely to miss unpublished studies and studies that are not commercially published and, therefore, are not indexed in the respective databases (so-called gray literature, such as conference abstracts, dissertations, policy documents and book chapters). If the results from missing studies differ systematically from the published data, a meta-analysis may become biased with an inaccurate assessment of the intervention’s effects. For instance, positive, significant findings are more likely to be published than non-significant findings, and a meta-analysis which is based mainly on published literature may end up overestimating the efficacy of the intervention [3–5].

However, the impact of gray literature and unpublished studies on the conclusions of meta-analyses has not been comprehensively clarified. For example, there is some evidence that suggests that published randomized controlled trials (RCTs) tend to be larger and show an overall greater treatment effect than gray trials [6]. But the identification of relevant unpublished studies or studies published in the gray literature and their inclusion in meta-analyses can be particularly time-consuming and challenging. There is also some controversy as to whether unpublished studies and studies published in the gray literature should be included in meta-analyses because they might be incomplete and their methodological quality (validity) can be difficult to assess. A publication by Cook and colleagues in 1993 showed that 78% of authors of meta-analyses felt that unpublished studies should be included in meta-analyses compared to only 47% of journal editors [7]. Therefore, research is needed to help assess the potential implications for reviewers of not including gray literature and unpublished studies in meta-analyses of health care interventions.

In view of the above-mentioned controversies regarding the effect of including unpublished studies and studies published in the gray literature on the results of meta-analyses, we will conduct a systematic review with the following objectives:

▪ To assess the impact of studies that were not published or only published in the gray literature on pooled effect estimates in meta-analyses (quantitative measure)

▪ To assess whether the inclusion of unpublished studies or studies published in the gray literature impacts the conclusions of meta-analyses (qualitative measure)

This systematic review will be part of the OPEN Project (To Overcome failure to Publish nEgative fiNdings), which was developed with the goal of elucidating the scope of non-publication of studies through a series of systematic reviews. In an earlier issue of this journal (‘Systematic Reviews’), our group has already published a protocol for a systematic review which evaluates the extent of non-publication of research studies, which were approved by ethics committees, registered in trial registries or presented as conference abstracts [8].

Search methods for identification of methodological research projects

To identify the relevant research evidence we will conduct electronic literature searches in the following databases: Ovid Medline (1946 to present), Ovid Medline Daily Update, Ovid Medline In-Process & Other Non-Indexed Citations, Ovid Embase (1980 to present), The Cochrane Library (most current issue) and Web of Science. No language restrictions will be applied.

In addition, the bibliographies of any eligible articles identified will be checked for additional references and citation searches will be carried out for all included references using ISI Web of Knowledge.

A search strategy for the electronic literature search in Ovid Medline has already been designed with the support of a librarian/information specialist. This strategy was translated as appropriate for the other databases (for the full search strategies see Appendix A). In addition, we will contact various experts in the field for further eligible studies.

Data collection and analyses

Selection of methodological research projects

A methodological research project will be considered eligible for inclusion in this systematic review if it reviews a cohort of meta-analyses (that is, more than one meta-analysis) that:

▪ compare pooled effect estimates of meta-analyses of health care interventions according to publication status (that is, published versus unpublished studies or gray literature) or

▪ examine whether the inclusion of unpublished studies or gray literature impacts the overall findings or conclusions of a meta-analysis

We will consider ‘published’ articles to be manuscripts that appeared in peer-reviewed journals. Our working definition of gray literature will correspond to the definitions used by the authors of eligible methodological research projects and will also conform to the definition of ‘gray literature’ described earlier in this protocol (see Background). A meta-analysis is defined as the calculation of a summary estimate of treatment effect by pooling the results of two or more studies.

Data extraction

A specifically designed data extraction form will be developed and two reviewers will independently extract all relevant data from eligible methodological research projects. The following information will be collected (a minimal structural sketch of these fields is given after the list):

Characteristics of the methodological research project

Baseline data (for example, author names, affiliation, language and year of publication, funding, type of report (for example, full publication, abstract))

Area of health care/medical specialty

Number of meta-analyses included

Characteristics of the meta-analyses included in the methodological research project

Type of meta-analyses (for example, individual patient data meta-analyses)

Number of studies included in meta-analyses (overall, median, range)

Number of participants included in meta-analyses (overall, median, range)

Main purpose of meta-analyses (efficacy versus safety)

Source used to retrieve unpublished studies, studies published in the gray literature and published studies

Characteristics of the studies included in the meta-analyses

Number of unpublished studies, studies published in the gray literature and published studies

Number of participants in unpublished studies, in studies published in the gray literature and in published studies

Number of statistically significant positive or negative unpublished studies, studies published in the gray literature and published studies

Type of unpublished studies (for example, RCTs, observational studies), studies published in the gray literature (for example, abstracts, dissertation, letter, book chapters) and published studies (for example, RCTs, observational studies)

Year of publication of unpublished studies, studies published in the gray literature and published studies

Language and country of unpublished studies, studies published in the gray literature and published studies

Funding source of unpublished studies, studies published in the gray literature and published studies

Type of data source in which gray, unpublished and published studies were identified

Methodological quality (for example, blinding, follow-up time, sample size calculation) of unpublished studies, studies published in the gray literature and published studies (this aspect can only be evaluated if the methodological research project provides enough information)
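The sketch below shows one possible way to organise the extraction fields listed above as a small data structure. It is illustrative only: it is not the authors’ actual extraction form, it abridges several fields, and all class and attribute names are hypothetical.

# Illustrative, abridged sketch of the data extraction fields (hypothetical names).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StudyGroupCharacteristics:
    # One record per publication-status group within a meta-analysis
    publication_status: str                  # "unpublished", "gray" or "published"
    n_studies: int
    n_participants: int
    study_types: List[str] = field(default_factory=list)   # e.g. "RCT", "observational"
    funding_source: Optional[str] = None

@dataclass
class MetaAnalysisRecord:
    ma_type: str                              # e.g. "individual patient data"
    n_studies: int
    n_participants: int
    main_purpose: str                         # "efficacy" or "safety"
    retrieval_sources: List[str] = field(default_factory=list)
    study_groups: List[StudyGroupCharacteristics] = field(default_factory=list)

@dataclass
class MethodologicalResearchProject:
    authors: str
    year: int
    healthcare_area: str
    report_type: str                          # e.g. "full publication", "abstract"
    meta_analyses: List[MetaAnalysisRecord] = field(default_factory=list)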

Assessment of validity

We will systematically consider the validity and generalizability of the identified evidence provided by each of the methodological research projects by evaluating the following aspects:

Internal validity:

Role of confounding factors: The results of published studies may differ from those of unpublished studies because of factors other than publication status, such as study design, type of participants, characteristics of the intervention, and methodological quality. In this context, did the researchers of the meta-analyses select comparison groups that were matched (for example, did the unpublished studies or studies published in the gray literature share similar aims, designs, and sample sizes with the published ones)? If not, were suitable adjustments made for potentially confounding factors?

Definition of publication status: Are explicit criteria given to categorize or define unpublished studies, studies published in the gray literature and published studies?

Selection process: Are search criteria given to identify unpublished studies, studies published in the gray literature and published studies?

External validity (generalizability):

Did the researcher of the methodological research project select a broad-ranging sample of meta-analyses that reflect the current literature in the field of interest (for example, in terms of size, diversity of topic)?

How was the sample determined (for example, random sample)?

Did two researchers carry out data extraction independently?

Did the researchers provide a complete dataset (regarding the characteristics of the methodological research project and included meta-analyses)?

Outcome measures

The extent to which the effect estimate in a meta-analysis changes with the inclusion or exclusion of unpublished studies and gray literature (quantitative measurement). If possible, we will calculate a ratio of risk ratios or odds ratios between the results of unpublished studies and studies published in the gray literature and the results of published studies, and estimate the percentage change (pooled risk ratio from unpublished studies and gray literature divided by pooled risk ratio from published studies). A weighted pooled overall estimate will be calculated taking into account the number of studies, participants and events.
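As a purely illustrative sketch of this quantitative comparison (not the OPEN project’s analysis code), the snippet below pools hypothetical risk ratios within each publication-status group by inverse-variance weighting on the log scale, then forms the ratio of pooled risk ratios and the corresponding percentage change. All data and names are invented.

# Illustrative only: ratio of pooled risk ratios (unpublished/gray vs published).
import math

def pooled_log_rr(studies):
    """Fixed-effect (inverse-variance) pooled log risk ratio."""
    weights = [1 / se ** 2 for _, se in studies]
    log_rrs = [math.log(rr) for rr, _ in studies]
    return sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)

# Hypothetical data: (risk ratio, standard error of the log risk ratio) per study
unpublished_or_gray = [(0.70, 0.20), (0.85, 0.25)]
published = [(0.60, 0.15), (0.65, 0.18), (0.72, 0.22)]

ratio_of_rrs = math.exp(pooled_log_rr(unpublished_or_gray) - pooled_log_rr(published))
percentage_change = (ratio_of_rrs - 1) * 100
print(f"Ratio of pooled risk ratios: {ratio_of_rrs:.2f} ({percentage_change:+.0f}% change)")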

The impact of the inclusion of unpublished studies or studies published in the gray literature on conclusions of meta-analyses (qualitative measurement). The impact will be estimated by calculating the proportion of meta-analyses which show a change in their conclusions according to publication status of the included studies; categorization will be as follows:

Change from negatively significant to positively significant

Change from inconclusive to positively significant

Change from positively significant to inconclusive

Change from negatively significant to inconclusive

Change from inconclusive to negatively significant

Change from positively significant to negatively significant

Change from not clinically relevant to clinically relevant

Change from not clinically relevant to inconclusive

Change from clinically relevant to inconclusive

Change from clinically relevant to not clinically relevant

Change from inconclusive to clinically relevant

Change from inconclusive to not clinically relevant

Significance and clinical relevance will be defined according to the definitions provided in the methodological research project.

Unit of analysis issues

The anticipated unit of analysis is the individual meta-analysis included in each methodological research project.

Assessment of heterogeneity

Heterogeneity for pooled outcome measures will be assessed by standard methods, including the Chi²-test and calculation of the I² value [9].
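For illustration only, the sketch below computes Cochran's Q and the I² statistic for a handful of invented study estimates; a p-value for Q would additionally require comparing Q with a chi-squared distribution on k − 1 degrees of freedom.

# Illustrative only: Cochran's Q and I^2 for effect estimates on the log scale.
import math

def heterogeneity(effects, ses):
    """Return Cochran's Q and the I^2 statistic (as a percentage)."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

# Hypothetical log risk ratios and their standard errors
log_rrs = [math.log(0.60), math.log(0.75), math.log(0.95)]
ses = [0.15, 0.20, 0.25]
q, i2 = heterogeneity(log_rrs, ses)
print(f"Q = {q:.2f} on {len(log_rrs) - 1} df, I^2 = {i2:.0f}%")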

Assessment of reporting biases

Funnel plots will be used to assess the association between point estimates of the log odds ratio (a measure of the extent of association between meta-analyses’ characteristics and change in summary estimates) and a measure of precision, if more than ten methodological research projects provide the necessary information. Funnel plots will be visually assessed, and appropriate formal statistical tests following the recommendations formulated by Sterne et al. will be used to test for asymmetry [5]. In the instance of suspected reporting bias, authors will be contacted.
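The sketch below illustrates the basic idea behind one common regression-based asymmetry test (an Egger-type regression of the standardised effect on precision). It is a simplification for illustration, not the formal procedures recommended by Sterne et al., and all data are invented.

# Illustrative only: Egger-type intercept as an indicator of funnel plot asymmetry.
def egger_intercept(effects, ses):
    """Ordinary least-squares intercept of (effect / SE) regressed on (1 / SE)."""
    z = [e / s for e, s in zip(effects, ses)]   # standardised effects
    x = [1 / s for s in ses]                    # precisions
    n = len(z)
    mean_x, mean_z = sum(x) / n, sum(z) / n
    slope = (sum((xi - mean_x) * (zi - mean_z) for xi, zi in zip(x, z))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_z - slope * mean_x              # intercept far from 0 suggests asymmetry

# Hypothetical log odds ratios and standard errors
effects = [-0.50, -0.40, -0.60, -0.30, -0.70, -0.20, -0.80, -0.35, -0.55, -0.45]
ses = [0.10, 0.15, 0.20, 0.25, 0.30, 0.12, 0.35, 0.18, 0.22, 0.28]
print(f"Egger intercept: {egger_intercept(effects, ses):.2f}")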

Data synthesis

Data synthesis will involve a combination of descriptive and statistical summaries of the impact of the inclusion or exclusion of unpublished studies and gray literature on the results of meta-analyses (identified by methodological research projects).

The decision on whether or not to combine the results of the included methodological research projects will depend on the assessment of heterogeneity. Where the methodological research projects are judged to be sufficiently homogeneous in their design, a meta-analysis of these research projects will be carried out. The estimated ratios of treatment effects (unpublished or gray-literature-only studies versus published studies) generated from each methodological evaluation will then be used to summarize the overall difference in risk ratios between these two groups of studies. The 95% confidence interval for the combined effect will be estimated using a random effects model.
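As an illustrative sketch of such a random-effects combination, the snippet below uses the DerSimonian-Laird estimator (one common choice; the protocol does not specify a particular estimator) to pool invented log-ratios of treatment effects and return the combined ratio with an approximate 95% confidence interval.

# Illustrative only: DerSimonian-Laird random-effects pooling of log-ratios.
import math

def random_effects(log_ratios, ses):
    w = [1 / s ** 2 for s in ses]
    fixed = sum(wi * y for wi, y in zip(w, log_ratios)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ratios))
    df = len(log_ratios) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-evaluation variance
    w_star = [1 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * y for wi, y in zip(w_star, log_ratios)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Hypothetical per-evaluation ratios of treatment effects and their standard errors
log_ratios = [math.log(0.91), math.log(1.05), math.log(0.85)]
ses = [0.08, 0.12, 0.10]
estimate, (lo, hi) = random_effects(log_ratios, ses)
print(f"Combined ratio: {estimate:.2f} (95% CI {lo:.2f} to {hi:.2f})")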

Subgroup analyses and investigation of heterogeneity

The following subgroup analyses are planned:

On the level of the methodological research project

Number of meta-analyses included in the methodological research project

Number of participants included in the methodological research project

On the level of the meta-analyses

Number of studies (unpublished studies versus studies published in the gray literature versus both)

Number of participants included in studies (unpublished studies versus studies published in the gray literature versus both)

Design of studies (unpublished studies versus studies published in the gray literature versus both)

Source of database: gray literature (for example, conference abstracts or research letters) published in an easily accessible database versus unpublished studies that require considerable effort to identify (for example, contact with the pharmaceutical industry)

Type of research work (drug versus non-drug studies, clinical research versus basic research)

Area of health care

Sensitivity analyses

No sensitivity analyses are planned. However, should a methodological research project with doubtful eligibility be identified (as determined by individual review and assessment of the validity of the methodological research project), sensitivity analyses may be undertaken. Possible sensitivity analyses may be based on the following:

▪ Methodological quality/validity of the methodological research project; only methodological research projects with a low risk of bias will be considered. A methodological research project will be considered at high risk of bias when the external validity and/or the validity of the included meta-analyses is doubtful.

This systematic review seeks to comprehensively synthesize the growing body of research related to the impact of including unpublished studies and studies published in the gray literature in meta-analyses. By considering multiple characteristics and potential confounders related to unpublished studies and studies published in the gray literature, we hope to identify sufficient evidence to conclude whether (or to what extent) inclusion of unpublished studies and studies published in the gray literature has an impact on the pooled effect estimates and the conclusions of a meta-analysis. The findings, including risk factors for unpublished studies and studies published in the gray literature, will have important implications for researchers conducting meta-analyses, since they need to be informed about the impact and extent of (not) including unpublished and gray studies in meta-analyses. In addition, this systematic review, in combination with the results of other systematic reviews that are part of the OPEN Project, will serve to raise awareness about the impact of publication bias and the complexity of this issue. These reviews will also serve as a foundation for a recommendations workshop which will enable key members of the biomedical research community (for example, funders, research ethics committees, and journal editors) to develop future policies and guidelines to lessen the frequency of non-publication and related biases.

We acknowledge that more than half of all systematic reviews do not include a meta-analysis. Although our main outcomes focus on the impact of unpublished and gray studies on pooled effect estimates in meta-analyses, our findings will also be valuable for systematic reviews without meta-analysis: if we find a statistical difference in the pooled effect estimates of meta-analyses, it is also likely that gray and unpublished literature affects the descriptive results of systematic reviews. Besides effect estimates, we will also evaluate differences in methodological quality and study characteristics (such as number of participants and language) between unpublished, gray and published studies. These results will also help authors of systematic reviews appraise the potential impact of publication bias.

Search Strategy for OvidSP MEDLINE (search strategy will be adapted for other databases)

Lines 10 to 14 and 16 to 35 are not shown because they are part of the search strategy for our 1st systematic review [8] and, therefore, have no consequence for this search.

exp Publishing/sn

*publishing/

publication bias/

selection bias/

exp manuscripts as topic/

((data or finding? or information or evidence or study or studies or trial? or paper? or article? or report* or literature or work or manuscript? or abstract* or result?) adj6 (unpublish* or un-publish* or unreport* or un-report* or nonpublish* or non-publish* or nonpublicat* or non-publicat* or (publication? adj3 rate?) or "not publish*")).ti,ab.

(underreport* or under-report* or selective report* or selective publish* or selective publicat* or (final* adj2 (report* or publish* or publicat* or manuscript? or paper? or article?)) or (full? adj2 (report* or publish* or publicat* or manuscript? or paper? or article?)) or (subsequent* adj2 (report* or article? or paper? or publi* or manuscript?)) or (sub-sequent* adj2 (report? or article? or paper? or publi* or manuscript?)) or (complete* adj2 (report* or article? or paper? or publish* or publicat* or manuscript?))).ti,ab.

(bias* adj3 (publish* or publicat*)).ti,ab.

exp animals/ not humans/

meta-analysis as topic/

Guidelines as Topic/ or Practice Guidelines as Topic/

exp Clinical Trials as Topic/

meta-analysis.pt.

(guideline or practice guideline).pt.

(guideline? or metaanaly* or meta-analy* or metanaly* or meta-synthe* or metasynthe* or meta-regressi* or metaregressi*).ti,ab.

(systematic* adj3 (review* or overview*)).ti,ab.

exp Technology Assessment, Biomedical/

(health technology assessment? or HTA).ti,ab.

((implication? or impact? or influenc* or effect? or differen*) adj6 (publication bias* or unpublish* or un-publish* or unreport* or un-report* or nonpublish* or non-publish* or nonpublicat* or non-publicat* or "not publish*")).ti,ab.

((implication? or impact? or influenc* or effect? or differen*) adj6 (selective report* or selective publish* or selective publicat* or (final* adj2 (report* or publish* or publicat* or manuscript? or paper? or article?)) or (full? adj2 (report* or publish* or publicat* or manuscript? or paper? or article?)) or (subsequent* adj2 (report* or article? or paper? or publi* or manuscript?)) or (sub-sequent* adj2 (report? or article? or paper? or publi* or manuscript?)) or (complete* adj2 (report* or article? or paper? or publish* or publicat* or manuscript?)))).ti,ab.

((unpublish* or un-publish* or unreport* or un-report* or nonpublish* or non-publish* or nonpublicat* or non-publicat* or "not publish*") adj6 publish*).ti,ab.

(underreport* or under-report* or selective report* or selective publish* or selective publicat* or (final* and (report* or publish* or publicat* or manuscript? or paper? or article?)) or (full? and (report* or publish* or publicat* or manuscript? or paper? or article?)) or (subsequent* and (report* or article? or paper? or publi* or manuscript?)) or (sub-sequent* and (report? or article? or paper? or publi* or manuscript?)) or (complete* and (report* or article? or paper? or publish* or publicat* or manuscript?))).ti.

(unpublish* or un-publish* or unreport* or un-report* or nonpublish* or non-publish* or nonpublicat* or non-publicat* or "not publish*" or bias*).ti.

47 and (3 or 4)

47 and (36 or 37)

56 and (6 or 7)

*meta-analysis as topic/

*Guidelines as Topic/ or *Practice Guidelines as Topic/

47 and (58 or 59)

(6 or 7) and (3 or 8)

54 or 55 or 57 or 60 or 61

(unpublish* or un-publish* or unreport* or un-report* or nonpublish* or non-publish* or nonpublicat* or non-publicat* or "not publish*").ti,ab.

remove duplicates from 69

OPEN Consortium

Table 1 lists the members of the OPEN Consortium.

References

1. Center for Reviews and Dissemination: Undertaking systematic reviews of research on effectiveness: CRD’s guidance for those carrying out or commissioning reviews. 2001, York: University of York.

2. Higgins JPT, Green S: Cochrane handbook for systematic reviews of interventions version 5.1.0 [updated March 2011]. The Cochrane Collaboration. 2011, http://www.cochrane-handbook.org

3. Song F, Eastwood A, Gilbody S, Duley L, Sutton AJ: Publication and related bias. Health Technol Assess. 2000, 4: 1-115.

4. McAuley L, Pham B, Tugwell P, Moher D: Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analysis. Lancet. 2000, 356: 1228-1231. 10.1016/S0140-6736(00)02786-0.

5. Sterne JA, Sutton AJ, Ioannidis JP, Terrin N, Jones DR, Lau J, Carpenter J, Rucker G, Harbord RM, Schmid CH, Tetzlaff J, Deeks JJ, Peters J, Macaskill P, Schwarzer G, Duval S, Altman DG, Moher D, Higgins JP: Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011, 343: d4002. 10.1136/bmj.d4002.

6. Hopewell S, McDonald S, Clarke M, Egger M: Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007, 2: MR000010.

7. Cook D, Guyatt GH, Ryan G: Should unpublished data be included in meta-analyses? Current convictions and controversies. JAMA. 1993, 269: 2749-2753. 10.1001/jama.1993.03500210049030.

8. Portalupi S, von Elm E, Schmucker C, Lang B, Motschall E, Schwarzer G, Gross IT, Scherer RW, Bassler D, Meerpohl JJ: Protocol for a systematic review on the extent of non-publication of research studies and associated study characteristics. Systematic Reviews. 2013, 2: 2. 10.1186/2046-4053-2-2.

9. Higgins JP, Thompson SG, Deeks JJ, Altman DG: Measuring inconsistency in meta-analyses. BMJ. 2003, 327: 557-560. 10.1136/bmj.327.7414.557.

Acknowledgements

We thank Patrick Oeller and Laura Cabrera for their input during development and piloting of our data extraction form.

The OPEN Project (http://www.open-project.eu) is funded by the European Union Seventh Framework Programme (FP7 - HEALTH.2011.4.1-2) under grant agreement n° 285453.

Author information

Authors and affiliations

German Cochrane Center, Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, 79110, Freiburg, Germany

Christine Schmucker, Annette Bluemle, Susan Portalupi, Britta Lang, Erik von Elm & Joerg J Meerpohl

Basel Institute for Clinical Epidemiology and Biostatistics, University Hospital Basel, Basel, 4031, Switzerland

Matthias Briel

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Canada

Institute of Medical Biometry and Medical Informatics, University Medical Centre Freiburg, Freiburg, 79104, Germany

Edith Motschall & Guido Schwarzer

Centre for Paediatric Clinical Studies, University Medical Center Tuebingen, Tuebingen, 72070, Germany

Dirk Bassler & Katharina F Mueller

Cochrane Switzerland, IUMSP, University Hospital Lausanne, Lausanne, 1005, Switzerland

Erik von Elm

the OPEN Consortium, Germany

Christine Schmucker


Corresponding author

Correspondence to Joerg J Meerpohl.

Additional information

Competing interests

We declare that all authors and contributing members have no competing interests.

Authors’ contributions

JM is the lead researcher of this project. JM and CS, along with EvE and SP, developed the methodologies of the systematic review protocol and led the writing of the protocol. EM designed the search strategy. DB, MB and GS contributed significantly to the writing and revision of the protocol. All authors critically revised the protocol and read and approved the final version.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article.

Schmucker, C., Bluemle, A., Briel, M. et al. A protocol for a systematic review on the impact of unpublished studies and studies published in the gray literature in meta-analyses. Syst Rev 2, 24 (2013). https://doi.org/10.1186/2046-4053-2-24


Received : 23 January 2013

Accepted : 15 April 2013

Published : 02 May 2013

DOI : https://doi.org/10.1186/2046-4053-2-24


  • Publication bias
  • Gray literature
  • Unpublished studies
  • Meta-analyses
  • The OPEN project



UOW APA 7 Guide: Unpublished or raw dataset

In text citation

Paraphrase

(Author, Year published)

Author (Year published)

(Jerome, 1995)

Roamli (2006)

Reference list

Author’s last name, Author’s initials. (Year published). Dataset title [format]. Publisher. URL or DOI

  • Format is “dataset”.
  • Dataset title is not italicised as it is not published.

Roamli, L. I. (2006). Wood duck population in Australia [dataset]. NSW Avian Society. www.nswaviansociety.com.au/woodduck/nsw

Jerome, O. J. (1995). Sample population of macroscopic insects in Northern Victorian farmlands [dataset]. Victorian Entomologist Society. www.vicentomologistsociety.vic.gov.au



APA 7th Edition Citation Guide: Previous Coursework

How to cite yourself

When citing a paper that you wrote for a previous class, consider yourself as the author and your previous course work as an unpublished paper. Include [Unpublished manuscript] in brackets after the title.

Reference Page Format:

Author, A. A. (Year written). Title [Unpublished manuscript]. Institution.

Reference Page Example:

O’Toole, T. (2019).  An analysis of pre-WWII leaders  [Unpublished manuscript]. Concordia University, St. Paul.  

In-text Citation Examples:

According to O’Toole (2019)... ...(O’Toole, 2019). ...(O’Toole, 2019, p. 4).

Blackboard Lectures and PowerPoints

Sources on Blackboard, such as recorded lectures and PowerPoints, are not available to people outside of your institution. If the audience of your paper is your professor and/or classmates who have access to the content, use the following examples.

If your audience is not enrolled in your course or part of your institution and therefore does not have access to the content, cite the content as a Personal Communication.

Author, A. A. (Year, Month Day). Title [Format]. Blackboard@CSP.  https://csp.blackboard.com/
Neilson, J. (2022, September 1).  What the library can do for you  [PowerPoint Slides]. Blackboard@CSP.  https://csp.blackboard.com/
According to Neilson (2022)... ...(Neilson, 2022).

