Recruiting a dyadic EMA sample on Prolific

data collection
Author

Haran Sened

Published

March 13, 2024

As a researcher focused on dyadic experience sampling and neuroimaging, I haven’t had much experience using online participant recruitment platforms. Dyadic experience sampling poses a double challenge on these platforms - recruiting two people who are connected in real life, and keeping both of them working through the study every day for a few weeks. With PAVE, I originally planned to recruit only a pilot sample through such a platform - Prolific in this case. However, running the pilots convinced me that this could be a viable option for a full project, and a few months later we recruited our whole first sample of over 150 dyads through the platform. While there were some bumps along the way, and we’ll only be able to judge the final quality of the sample once we’ve perfected our data cleanup procedures, we were largely happy with the results. I’ll detail how we went about this here, and I’m happy to answer questions (see the about page for contact details). Note that this all describes Prolific in late 2023, although I believe other platforms would pose similar challenges.

Challenges

The easiest studies to run on Prolific, or indeed on any of these platforms, are short one-off studies where each participant gets a link to an online questionnaire/experiment, completes it and is done. Compared to those kinds of studies, we faced multiple challenges:

  • To recruit real dyads (romantic partners or friends), we needed a way to reach both members at once.

  • To do experience sampling, we needed to contact the same participants repeatedly.

  • We wanted participants to answer 5 questionnaires a day, but participants typically don’t log into Prolific that often. While the questionnaires were served by a separate app, we needed a way to pay participants on Prolific - but only if they completed enough questionnaires.

  • As always with these kinds of platforms, we needed ways to verify that our participants were real people who were not giving us junk results - especially since the study paid quite a lot of money compared to the typical Prolific study ($70).

  • Prolific requires that participants are paid every 24 hours. This meant we had to interact with them constantly through the platform, even though they didn’t have much to do there.

Our approach

The way we ran the study was as follows:

  1. First, participants were directed to a Prolific study containing an online screening questionnaire, which gave them information about the study, asked for the Prolific ID of a friend, and paid them a small amount. It only lasted a few minutes and accordingly paid very little (although payment per hour was the same as for all our studies).

  2. Then, we ran a second Prolific study, open only to the friends (using an ID whitelist). This was a similar screener that simply ensured the friend wanted to participate in the study. It was also a short study with low payment (again, with the same hourly rate).

  3. The next step was a third Prolific study, which served as our true background questionnaire and was sent to dyads who completed the first two studies (using an ID whitelist). It also guided participants through installing the experience sampling software (the m-Path app, by KU Leuven) on their phones.

  4. Then, the main bulk of the study began. Every day we set up an identical Prolific study (think “PAVE study daily 12-10-2023”, “PAVE study daily 12-11-2023”, …) with no requirements - participants could accept and immediately submit. We explained to participants that they would only be paid if they completed the agreed-upon number of questionnaires for that day on the phone app.

    The whitelist for each day was created semi-manually by filtering an Excel file listing participants by completion day (a sketch of how this step could be scripted appears after this list). In total it took about 3-5 minutes to set up each day’s study, and Prolific allows scheduling studies in advance.

  5. We used a similar approach to give participants bonuses at the end of each full week they stayed in the study.

  6. Finally, after the last day, we sent participants a follow-up Prolific study, which also asked them to delete the app.
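
To make the daily whitelist step concrete, here is a minimal sketch of how the filtering could be scripted rather than done by hand in Excel. The file name, column names, and the way the window is computed are illustrative assumptions, not our exact setup:

```python
# Minimal sketch: build the whitelist for a given study day.
# Assumes a hypothetical participants.xlsx with columns "prolific_id"
# and "start_date" (the day each participant began the EMA period);
# these names are illustrative, not our actual file.
import pandas as pd

STUDY_LENGTH_DAYS = 21  # the 3-week EMA period

def daily_whitelist(today, path="participants.xlsx"):
    """Return the Prolific IDs of participants whose EMA window covers `today`."""
    df = pd.read_excel(path, parse_dates=["start_date"])
    today = pd.Timestamp(today)
    start = df["start_date"]
    active = (start <= today) & (today < start + pd.Timedelta(days=STUDY_LENGTH_DAYS))
    return df.loc[active, "prolific_id"].tolist()

# Paste the result into the allowlist of that day's study,
# e.g. "PAVE study daily 12-10-2023".
print("\n".join(daily_whitelist("2023-10-12")))
```

Since Prolific lets you schedule studies in advance, a small loop over the next few dates could generate all upcoming whitelists in one sitting.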

What went well

First of all, we managed to recruit around 300 participants in two months, and we could have done it faster if we hadn’t intentionally limited the recruitment rate (to make sure we didn’t waste too much money on unforeseen issues). While we haven’t looked directly at the data yet, the number of questionnaires completed, communication with participants (e.g., around various issues they were having with the software), and other indicators make us confident that data quality should be adequate.

Second, the screening questionnaires proved their worth in saving us time and money. For every dyad that met our overall target for the whole study (a 3-week EMA), we had 0.2 dyads that dropped out mid-study (often early), 0.32 dyads that completed only the background questionnaires, and 2.31 dyads that completed only the screeners. Since the cost of screener participants was negligible, in monetary terms only about 10% of funding went to dyads who didn’t meet the overall target. In my experience this is pretty typical for this kind of study.
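
As a back-of-the-envelope check on that 10% figure: the dyad ratios above are from our data, but the per-stage costs in this sketch are hypothetical placeholders, not our actual payment amounts.

```python
# Back-of-the-envelope check of the "~10% of funding" claim.
# The per-dyad ratios come from the funnel above; the per-stage costs
# are hypothetical placeholders, not our actual payment amounts.
FULL_COST = 140.0       # assumed cost of a completing dyad (2 participants x ~$70)
DROPOUT_COST = 50.0     # assumed average payout to a dyad that dropped out mid-study
BACKGROUND_COST = 12.0  # assumed cost of a dyad that stopped after background questionnaires
SCREENER_COST = 2.0     # assumed (negligible) cost of a screener-only dyad

wasted = 0.2 * DROPOUT_COST + 0.32 * BACKGROUND_COST + 2.31 * SCREENER_COST
total = FULL_COST + wasted
print(f"Share of funding to non-completers: {wasted / total:.1%}")  # ~11.6%
```

With these placeholder numbers the share lands around 11-12%, in the same ballpark as the figure above; notably, the screener-only dyads barely register despite being the most numerous.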

Third, we saved a lot of manpower by not making phone calls to participants and not handling payments directly - we simply paid Prolific lump sums and communicated with participants through the in-platform messaging system.

Fourth, using Prolific’s option to recruit using demographic criteria (I believe most similar platforms offer this as well) helped us increase the diversity of our sample.

What didn’t go so well

Note: I’m not discussing general difficulties with dyadic EMA studies (e.g., sometimes one partner’s participation is flaky while the other partner’s participation is excellent, leaving the researcher in a dilemma as to whether to drop such dyads from the study) although those obviously exist even when running on Prolific.

First, running this kind of study through a platform means total dependence on that platform. At one point, our ability to pay participants was halted. While we found a workaround (using a different account), it disrupted communication with participants and may have lost us a few of them. It took Prolific about a week to fully restore service.

Second, and more specific to dyadic studies, verifying participants’ relationship to each other was difficult. Since we were fine with “friends”, defined as being in contact three times a week, we felt safe assuming that people who managed to coordinate sharing a Prolific ID and who trusted each other to remain in the study were friendly enough with each other. If we had been looking specifically for, e.g., romantic partners, we would have had to deal with friends and acquaintances posing as partners, and I’m not sure how we could account for that.

Finally, we used Prolific’s option to filter for participants who have a friend on the system. On the one hand, that may have led us to more relevant participants. On the other hand, it severely limited the potential sample size, and it did seem like we had exhausted the available pool by the time the study was over. We know for a fact that some people who have friends on the platform were not included: we recruited the first participant in each dyad using this option, but they gave us the Prolific ID of the second participant, and some of those second participants did not have this option turned on (i.e., they were not labelled on Prolific as having a friend there). In hindsight, we should definitely have at least considered leaving this option off.

Some parting thoughts

In conclusion, what began as a small test ended up as an extremely useful recruitment method, even though we hit some snags along the way. The next data collection planned for the PAVE project will not use these methods, for technical reasons (it involves neuroimaging), which will allow us to compare the data obtained from the two approaches and say something about quality.

It would be a game-changer if one of these platforms offered dyadic studies and/or EMA studies as first-class study types. Having dyadic studies as a core study type would allow recruiting both dyad members together, with the platform making sure behind the scenes that both approve, without requiring clumsy screening. Platforms could also do some verification of dyadic status (e.g., ask to see a marriage license to confirm marriage, or IDs with the same address to confirm cohabitation).

Having EMA studies as a core study type could include, for example, automatically increasing each participant’s potential payment along a schedule, without the need to create a new study every 24 hours. One could imagine approving an initial payment plus an addition for each study day, and allowing the researcher to approve participation up to a specific day, which would authorize the corresponding payment (e.g., a participant starts a 3-week study on January 1st that pays $2 a day; after a week, the researcher checks that they completed all surveys up to January 5th and approves their participation up to that day, allowing the platform to pay them $10).
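
As a sketch of the payment logic in that example - the function and its parameters are made up for illustration, not an existing Prolific feature:

```python
# Sketch of the scheduled-payment idea above; the names and parameters
# are hypothetical, not an existing Prolific API.
from datetime import date

DAILY_RATE = 2.00  # $2 per study day, as in the example above

def approved_payment(start, approved_through):
    """Payment authorized once the researcher approves participation
    from `start` up to and including `approved_through`."""
    days = (approved_through - start).days + 1
    return days * DAILY_RATE

# The example from the text (the year is arbitrary):
# started January 1st, approved through January 5th.
print(approved_payment(date(2024, 1, 1), date(2024, 1, 5)))  # 10.0
```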

Hopefully, some of these online research platforms pick up the mantle.