How to streamline research funding applications in Canada

Diego Macrini
Oct 5, 2017 · 4 min read

Funding applications take researchers away from their work, and only a few lucky ones can say it was worth it. There is a better way.

Let’s imagine for a second that the following was the way Canadian researchers applied for funding. A researcher writes a funding application, has it reviewed internally at her institution, and, once it is approved, submits it to the intended funding agency, all with just a few clicks. This is not how things work today, because the software systems of agencies and institutions cannot talk to one another. The good news is that, for the first time, all the technological barriers that made this communication difficult in the past have been overcome.

Why should granting agencies open their systems?

Funding competitions are becoming harder and harder to win. Often, only 3% of applications receive funding, yet 100% of applicants spend valuable time writing their applications and having them reviewed internally and externally. Canada, for example, has over 200,000 researchers who apply for funding several times per year. That is clearly a lot of time spent by applicants and by reviewers. Making this process more efficient would directly advance the objectives of the granting agencies, which are to support and promote research and its dissemination.

There is already a working solution in the Canadian research ecosystem that shows the way. It is an aspect of the Canadian Common CV (CCV) that has received little attention but is arguably its main contribution. You have probably already read Jim Woodgett’s take on the CCV tragedy, but there is a side of the CCV saga that has actually worked well: the CCV “standard” for academic data sharing across systems. The so-called CCV schema is what allows software companies like Proximify to build solutions that “talk” with the CCV website and understand what it expects from a submitted file.

A funding agency can do the same. That is, it can publish on the web the technical specifications for each of its funding competitions. The specifications can be very rich, covering not just the application’s data fields to fill in, but also the help text that should be shown to researchers and the data constraints that must be respected to produce a valid application.
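To make this concrete, here is a minimal sketch of what a published, machine-readable competition specification could look like, and how third-party software could check an application against it before submission. The field names, limits, and the validation helper are hypothetical illustrations, not an actual agency format.

```python
# A sketch of a machine-readable competition specification and a validator that
# third-party software could run before submission. All names and limits below
# are hypothetical; they are not an actual agency format.

COMPETITION_SPEC = {
    "competition_id": "demo-operating-grant-2018",
    "fields": [
        {
            "name": "project_title",
            "type": "text",
            "required": True,
            "max_length": 200,
            "help_text": "A concise title for the proposed research project.",
        },
        {
            "name": "requested_amount",
            "type": "integer",
            "required": True,
            "min": 0,
            "max": 250000,
            "help_text": "Total funding requested, in Canadian dollars.",
        },
    ],
}


def validate_application(application, spec):
    """Return a list of human-readable problems; an empty list means the application is valid."""
    problems = []
    for field in spec["fields"]:
        value = application.get(field["name"])
        if value is None:
            if field.get("required"):
                problems.append("Missing required field: " + field["name"])
            continue
        if field["type"] == "text" and len(value) > field["max_length"]:
            problems.append(field["name"] + " exceeds " + str(field["max_length"]) + " characters")
        if field["type"] == "integer" and not field["min"] <= value <= field["max"]:
            problems.append(field["name"] + " must be between " +
                            str(field["min"]) + " and " + str(field["max"]))
    return problems


# Example: an institution's internal tools could flag problems before anything
# reaches the agency.
draft = {"project_title": "Streamlining funding applications"}
print(validate_application(draft, COMPETITION_SPEC))
# -> ['Missing required field: requested_amount']
```

Because the specification is just data, an institution’s internal review tools and a vendor’s CV software could both validate against the same published file, which is exactly the kind of interoperability described above.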

The proposed model would complement the current one, in which funding agencies offer a public web-based interface for submitting funding applications. The idea is simply to encourage professional software companies to produce alternative ways of completing funding applications, and to integrate them with the internal review process that happens within institutions prior to external submission, as well as with the financial systems that manage funded projects.

What is preventing this from happening?

The technology to achieve this is ready. There are good methods for publishing the specifications of funding applications. The approach created by the original CCV team in 2012 shows that the private sector can fill the gaps, and the software companies in the academic ecosystem are ready to take on the challenge. The only missing piece is to get the funding agencies on board and start thinking about open, decentralized solutions for grant management.

The main challenge to making this a reality is that the agencies analyze only their side of the problem and are unaware of the full application pipeline, which starts within universities and research centres.

One example of how agencies fail to see past their own needs appears when they talk about importing publication data from PubMed or ORCID. They usually forget that researchers begin tracking their publications as soon as they submit them anywhere, so that they can include them in their annual activity reports. That is why the CCV has fields with options like “submitted”, “under review”, “accepted”, “in press”, and “published”. Researchers enter a reference in their CV early on and then update its status over time. PubMed, ORCID, and Google Scholar can be handy for fetching additional data, such as pages, volume, and year, once that information is publicly known. But a common scenario is to end up with many duplicate entries after importing references, because the existing (incomplete) references in the CV were not merged with the new ones.
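As a rough illustration of the merging step that is usually missing, here is a small sketch of how an import tool could reconcile fetched references with the entries a researcher already maintains, matching on DOI or a normalized title instead of blindly appending. The data layout and function names are assumptions made for the example; this is not how the CCV or any particular product works.

```python
import re


def normalize_title(title):
    """Lowercase a title and strip punctuation so near-identical titles compare equal."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()


def merge_imported_references(cv_entries, imported_entries):
    """Merge imported references into an existing CV list instead of appending duplicates.

    Matches on DOI when available, otherwise on a normalized title. Fields the
    researcher already filled in (such as the publication status) are preserved;
    only missing details like volume, pages, or year are copied from the import.
    """
    by_doi = {e["doi"]: e for e in cv_entries if e.get("doi")}
    by_title = {normalize_title(e["title"]): e for e in cv_entries if e.get("title")}

    for imported in imported_entries:
        existing = None
        if imported.get("doi") and imported["doi"] in by_doi:
            existing = by_doi[imported["doi"]]
        elif imported.get("title") and normalize_title(imported["title"]) in by_title:
            existing = by_title[normalize_title(imported["title"])]

        if existing is None:
            cv_entries.append(imported)          # genuinely new reference
        else:
            for key, value in imported.items():  # fill gaps, never overwrite
                existing.setdefault(key, value)
    return cv_entries


# Example: the manually entered status survives, and the missing year is filled in.
cv = [{"title": "A Study of Grant Pipelines", "status": "in press"}]
fetched = [{"title": "A study of grant pipelines.", "year": 2017}]
print(merge_imported_references(cv, fetched))
# -> [{'title': 'A Study of Grant Pipelines', 'status': 'in press', 'year': 2017}]
```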

Another misconception within granting agencies concerns how much CV data a researcher can actually fetch from a reference or profile database. None of those systems can provide information on funding, awards, teaching, supervision, presentations, patents, and much more. When we consider a complete funding application, the gap gets worse, since applicants must also provide a project description, a budget, and the academic CVs of all project members, and all of it must be reviewed internally before it is submitted to a competition. Clearly, the real inefficiencies in the ecosystem cannot be solved by simply importing publication references or basic profile information.

Often, the IT teams of funding agencies have mandates that start and end within the tech space of the agencies. They are not researchers, and they are often unfamiliar with the steps that happen before an application reaches their system, or after an application is approved and enters the financial systems of academic institutions.

If the goal is to make all researchers more productive, then we must start accepting that the public and private sectors have to share knowledge and learn from each other. The technology is ready. The savings and advantages are large and straightforward. This is the time to try innovative solutions based on proven technologies from the Canadian research ecosystem.

Written by Diego Macrini

CEO/CTO @Proximify. Specialized in research information systems.
