A Private Conversation in a Public Place: The Ethics of Studying “Virtual Support Groups” Now
I feel compelled to give disclaimers whenever I speak to friends or family about my research on support groups for survivors of domestic violence. I always quickly clarify the circumstances that led me to this work. I want them to know the agency in my research is one where I have personally volunteered as a support group facilitator for nearly seven years—and it’s also one where I was previously a client, giving and receiving support in groups just like those that I now lead. Anxiously, I assure others that I would never share identifiable information about the clients I serve or their experiences of abuse with any audience, for any reason, without those clients’ knowledge and consent (which I do not wish to seek). Above all, it seems crucial to express that I never imagined conducting research on this agency when I first came into contact with it. It was only after five years of volunteering that I became interested in studying support groups, and that interest proceeded from the hope that rhetoricians like myself might find new ways of lending their specialized skills to non-profit organizations.
Needless to say, these disclaimers are meant to convey that I am acting ethically in my research—or at least, that I am trying very hard to do so. Investing significant amounts of time, energy, and care back into a community that once did the same for me, I assume a deeply personal mission to “do good research without doing bad things” (Cagle 1). And according to some research ethics scholars, perhaps my choices have been acceptable. In a discussion with Heidi McKee and James Porter regarding her research on medical support groups, Laurie Cubbison opines, “the participant observer needs to establish some street cred… You really need to establish yourself in the community even before you start doing research” (Cubbison qtd. in McKee and Porter 100). Out of context, Cubbison’s statement could seem superfluous: most academics would discourage barging into a community utterly unknown to the researcher and launching a project devoid of any prior contact with potential subjects. Doing so would be deemed intrusive, arrogant, or deceitful, whereas the ability to “develop a relationship over time with participants” was once “a necessity for qualitative researchers (i.e., field research) in traditional social research” (Hall et al. 251). But importantly, what these scholars are discussing is not quite traditional research, but rather research on the Internet—in particular, on content drawn from message boards, listservs, social media posts, and the like. Though they are far easier to access than in-person groups, these Internet communities ironically raise far more ethical conundrums for some researchers who intend to study them.
Throughout the 1990s, increased access to the Internet among the general populace offered unprecedented opportunities for human connection and communication. For individuals who have endured some of the most traumatic or stigmatizing experiences known to humankind—for example, childhood sexual abuse, intimate partner violence, self-harm/suicidality, and so on—the ostensible anonymity and global scope of online communities provided an especially appealing alternative to face-to-face resources. Drawing on culturally available models of supportive communication, Internet users adopted the phrase “online support group” (or “virtual support group”) to refer to a vast range of communities and services enacted among members of various vulnerable populations. Meanwhile, eager to amplify the voices of trauma survivors and situate their experiences within broader systems of harm, scholars also began to study such communities with great enthusiasm—generally availing themselves of raw data in the form of members’ lengthy self-disclosing text posts.[1] Ethicists have expressed concern about the risks of studying online communities for about as long as such research has been conducted, yet recent work by rhetoricians indicates that we are still struggling to conceptualize “the public nature of ‘public’ data” (Buck and Ralston 2). Greatly exacerbating this struggle is, of course, the enormous gap between the rate at which “socio-technical systems” transform and the rate at which we can systematically analyze those transformations (Nissenbaum 5).
In this essay, I argue for a reexamination of Internet research on virtual support groups in light of two major socio-technical shifts in recent years: first, the significant changes in most Internet users’ relationships to video teleconference technologies (e.g., Zoom) during the COVID-19 pandemic; and second, the resulting changes to the concept of a “support group” as it is understood by vulnerable populations in a post-pandemic age. Clearly, evolving technology and social norms are greatly diversifying the range of online activities we still collectively refer to as “virtual support groups,” highlighting the need for a more nuanced analysis of these groups’ distinct modalities, the complexity of the self-concealment/exposure they afford, and their resulting epistemic potential. Driven by my experiences as a facilitator of both in-person and virtual support groups for survivors of domestic violence, I build a case study around the explosion of synchronous, video-based support groups in the United States from March 2020 onward. Specifically, I explicate several ethical quandaries that arose from one agency’s attempts to implement a Zoom-specific confidentiality policy in its support groups, showing how rapid uptake of this platform introduces new conflicts between core values that are usually compatible. Combining the apparent privacy of face-to-face group meetings with the ambiguous publicness of online communication, Zoom support groups illustrate the extent to which our understandings of “virtual support groups” have changed since scholars first started researching human subjects on the Internet—and therefore how much our ethical considerations must change, too.
Researching Internet Communities: Ongoing Ethical Debates
Most scholars would condemn infiltrating and studying a face-to-face support group without participants’ knowledge, yet for virtual communities, the temptation to do this is so strong as to warrant lengthy reflection and ethical debates. Why is this so? For many academics, researching Internet support groups is exempt from ethical review because the content of such groups is “already public” (Zimmer 313). In other words, it is open for use by anyone online—the group is easily locatable via search engines, requires no special credentials or identity verification to join, and (crucially) may be hosted on a platform whose terms of service agreement clearly states that users’ posts are accessible to the public. Collecting information shared in these groups, then, would be comparable to taking notes on conversations overheard in a “public square” (Kaufman qtd. in Zimmer 321) or a radio or television show (McKee and Porter 83), and posting a message to potential subjects would be like posting a flier on a bulletin board in a community center (Carrion 444; Opel 188). On the whole, the notion that Internet users waive their rights to privacy when using public platforms is so persuasive, and so pervasive, that Helen Nissenbaum has christened it “the normative ‘knock-down’ argument” (114).
Arguments that exclude Internet content from the purview of Institutional Review Boards (IRBs) often function as enthymemes, resting on an unspoken assumption that anything public is “fair game” for research (McKee and Porter 2; Zimmer 323). Nonetheless, Internet research ethicists increasingly reject the “public/private dichotomy” as a basis for ethical decisions (Nissenbaum 90), holding that this binaristic view neither reflects humans’ actual perceptions of privacy nor successfully protects them against harmful research—even if said research is fully legal and IRB approved.[2] Indeed, Dawn Opel summarizes a prevailing position on institutional ethics: “[legality] is not the whole of ethical research practice, in much the same way that IRB approval does not mean that a researcher has always acted ethically” (183, emphasis in original). Annette Markham and Elizabeth Buchanan similarly problematize the term “human subject” as it is applied in/out of regulatory frameworks, directing scholars’ focus instead to concepts like “harm, vulnerability, personally identifiable information, and so forth” (6). For these scholars and others, analyzing one’s research design involves a multiplicity of factors beyond the sensitivity of information or its public/private status, and such analysis must be done “using a complex process that weigh[s] these variables contextually” (McKee and Porter 87). A study’s ethicality cannot be evaluated through a simple binary of ethical/unethical; nor does it fall along one single continuum of ethical/unethical, and its placement across a wide range of continua cannot be judged solely through theoretical means. Instead, the most recent version of the Association of Internet Researchers’ (AoIR) widely adopted guidelines for ethical research stresses the importance of developing methods “from the bottom up” in a “case-by-case approach” while avoiding “a priori judgments” (franzke et al. 4).
In keeping with aline shakti franzke et al.’s endorsement of “ethical pluralism” and the many divergent “judgment calls” elicited through this approach (6), AoIR’s ethical guidelines consistently embrace the idea that “ambiguity, uncertainty, and disagreement are inevitable” (Ess qtd. in franzke et al. 6, emphasis in original; Markham and Buchanan 5).[3] Given that the Internet and its users are constantly changing, scholars cannot possibly account for the infinite number of factors that may ever affect the ethicality of Internet research—they would be shooting at a moving target. Hence, Markham and Buchanan impart that a “process approach” to ethics “highlights the researcher’s responsibility” for making decisions “within specific contexts and … a specific research project” (5). While scholars must “consult as many people and resources as possible,” it is clear that their individual values inform the harms they are willing to risk in order to produce new knowledge (Markham and Buchanan 5). In light of ample research showing online communities’ aversion to being studied (Hall et al. 250; Hudson and Bruckman 135; King 122)—as well as common-sense awareness that groups discussing “socially sensitive” topics are especially keen to limit their membership to “only others that understand, respect, and support their situation” (King 126)—it seems critical for researchers of virtual support groups to clarify “what greater benefit justifies the potential risks” of their work (Markham and Buchanan 11).
A feminist approach to Internet research helps scholars contextualize their choices at every stage of a project, empowering them to reflect on their individual standpoints while also valuing a multiplicity of other perspectives. Though it’s apparent that “There is not one single tradition of feminist history” or “discourse” (franzke et al. 64), several principles have emerged as typifying a feminist approach to Internet research. Both informing and echoing franzke et al.’s “Feminist Research Ethics” resource in the 2019 AoIR ethical guidelines, scholars have valued a feminist “ethics of care” (Cagle 7; Luka et al. 22); standpoint theory and situated knowledges (Carrion 443; Luka et al. 22); maximally contextualized praxis and data (Luka et al. 26); transparency about method/ologies (Carrion 443; De Hertogh 485; Luka et al. 30); reflexivity throughout the research process (Carrion 446; Luka et al. 23); and reciprocity and beneficence towards the community one is researching (Cagle 7; De Hertogh 495; Hall et al. 250). Underlying all of these values is a determination to honor research subjects’ dignity and hold oneself accountable for any harms thereto. Because the responsibility for making good judgments ultimately falls to individual researchers, feminist approaches place us in “vulnerable and often messy positions, where each researcher looks her or his own biases in the eye” (Luka et al. 31). In this process, one may be tempted to view ethical decisions as a sort of hard-won compromise between researchers and subjects; each party’s interests are assumed to contradict the other’s. Yet even if feminist scholars consciously choose to prioritize their subjects at the cost of their research, this does not mean ethical decisions are any easier to make. As revealed in the case study below, rapid changes in Internet-based research technologies are already requiring feminist scholars to reassess, not just whether/when to show beneficence to human subjects, but also which kinds of beneficence might be more imperative than others.
Case Study: Zoom Support Groups
Current scholarly discussions of Internet research often underscore—if not conclude on—the notion that ethical guidelines must evolve over time to meet new challenges presented by new conditions of the socio-technical systems we are studying. For example, Markham and Buchanan stress that the 2012 AoIR ethical guidelines were developed “in an effort to recognize and respond to the array of changing technologies and ongoing developments” (e.g., greater use of smartphones and social media) that had drastically changed the landscape of Internet-based research since the publication of the first version of the guidelines in 2002 (2). Likewise, the development of increasingly powerful Internet search engines since the late 1990s certainly problematizes the use of exact quotations from Internet communities in prior research: McKee and Porter inquire, “Did the discussants in the newsgroups in the 1980s and early 1990s envision the powerful search engine capabilities of Google and the like making their posts easily traceable?” (83). Nevertheless, few existing studies delve deeply into one specific, ongoing socio-technical transformation and its implications for ethical decision-making in the future. In what follows, I present a case study on video-based, synchronous support groups that convene via the popular video teleconferencing platform Zoom, explicating how the COVID-19 pandemic has reshaped Internet users’ relationships with video teleconferencing technology and, consequently, popular understandings of the term “support group.”
Under what (if any) conditions is it ethical for a scholar to study communications that occur within a Zoom meeting for members of a vulnerable population? While there is no easy answer to this, it is certain that individual scholars’ responses will be guided in part by their perceptions of the Zoom platform. For many Internet researchers, one of the most important factors affecting the ethicality of a project is its “venue”—the specific online platform they are visiting and their beliefs about its purpose, user base, terms of service, social norms, and so on (franzke et al. 16, 18). For instance, McKee and Porter share Yukari Seko’s reflections on her research on blogs by self-harming/suicidal authors, observing that “concern about the status of a blog” strongly influences her methods (96). Seko states: “If I think of [blogs] as the letter for the editors, I don’t have to get any informed consent, but if I think of it as personal conversation, I have to get informed consent… it’s totally related to my articulation of blog” (qtd. in McKee and Porter 95-96). If a scholar perceives a publicly accessible Zoom support group as analogous to an open Alcoholics Anonymous meeting at their local church, they might make ethical decisions that favor their right to drop in and study the group. Conversely, a scholar who perceives a Zoom support group as analogous to a group therapy session at a local mental health clinic will come to quite different conclusions. Some crucial questions for those interested in researching virtual support groups, then, must be, “What is Zoom?” and “Who or what is Zoom for?”[4]
Prior to the COVID-19 pandemic, the average American would have perceived Zoom (if at all) as a video teleconferencing tool used for professional, utilitarian purposes when an in-person meeting with one’s colleagues was unfeasible. Events on this platform probably would not have been “fair game” for academic research, if for no other reason than that opportunities to join a Zoom call you weren’t personally invited to were decidedly rare. Precious few scholarly articles had been written about Zoom, and even fewer had explored its utility in collecting qualitative data.[5] Yet in early 2020, Internet users’ relationships to this platform seemingly transformed overnight. According to The New York Times, Zoom’s daily user base skyrocketed from ten million pre-pandemic to three hundred million in April 2020 (Isaac and Frenkel). The platform’s distinctly user-friendly design, combined with its robust security features—now often credited for Zoom’s triumph over contemporary competitors such as Microsoft Teams, Google Hangouts, or Skype (Talukdar 167)—made it adaptable to a variety of new remote communication needs. In addition to enabling some individuals to work and attend school while quarantining, Zoom became a primary site of many people’s social lives. With just the click of a weblink, it suddenly became possible to join public-facing, widely attended Zoom events hosted by businesses, schools, non-profits, governments, and more any day of the week. What was once a fairly niche tool for private professional calls became, quite abruptly, a necessity for people of diverse identities to participate in public life. And public it is: even when hosts take precautions to prevent “Zoom bombing,” or disruptions from unwanted/uninvited parties, the possibility of an attendee surreptitiously recording sound, video, images, or text chats is always present. Hence, as many workers serving vulnerable populations would soon discover, it is inherent to Zoom’s design that the risk of confidentiality breaches is high and that the capacity of any single meeting attendee to prevent such breaches is low.
When the COVID-19 pandemic hit the southeastern United States in March 2020, I was one of the most active facilitators in the support group program at a domestic violence agency near my university. As was the case with most non-profit organizations in this era, the staff was obliged to adapt their services into an online format with very little time or prior experience to calibrate their choices. Following global trends, they moved all support group meetings to Zoom. Given the relative accessibility of this online space and the urgency of maintaining confidentiality while working with survivors—some of whom could be in grave danger if their information were unprotected—my supervisor and I soon recognized the need to implement Zoom-specific policies. Drafting our first “Zoom Support Group Confidentiality Agreement,” an addendum to the pre-group “Participation Agreement” clients always sign, was theoretically simple. We sought to identify all possible threats to confidentiality on Zoom and specify how clients should avoid them. However, as we gained more practical experience running virtual support groups, our policies received frequent edits and expansions. They also proved difficult to enforce, highlighting unexpected tensions between confidentiality and other agency values such as empowerment and access. To capture the ethical complexity of working with this population on this platform, I offer three basic conflicts we encountered, along with the questions we asked ourselves about each:
Where should group members physically be while attending meetings via Zoom?
- Are there any cases in which it is not preferred for members to attend via Zoom from their own homes?
- If a member cannot attend via Zoom from home, what alternative locations are acceptable? Are members permitted to attend meetings from their car, their workplace, their school, a park, a café/restaurant, the library, a friend or family member’s home, etc.?
- What locations are absolutely unacceptable for attending meetings via Zoom?
- Are members required to stay in the same location for the entire duration of the meeting?
- What measures should members take to ensure that their location is not under audio/video/other forms of surveillance?
Who should group members be with while attending these meetings via Zoom?
- Are there any cases in which it is not preferred for members to attend alone?
- If a member cannot attend alone, how much space and/or substance (walls, doors, etc.) should separate them from others in their environment?
- What sort of people can be nearby while members are attending meetings via Zoom? Are members permitted to attend meetings in the general vicinity of their abuser, other family members, friends, roommates, colleagues, classmates, fellow patrons, etc.?
- If a member is a caregiver for children, can they tend to those children during Zoom meetings? If yes, is there a maximum age/developmental stage after which this is not acceptable?
- What measures should members take to ensure that people in their environment cannot hear/see the meeting, including the members’ own contributions?
What audio/visual information should group members share during meetings?
- Are there any cases in which it is not preferred for members to have their cameras and microphones turned on at all times?
- If a member cannot keep their camera and microphone on at all times, is there a maximum amount of time they are permitted to have either one turned off?
- Are members permitted to obscure information about who/what is in the room with them by using a virtual or blurred background, positioning themselves against a corner or wall, playing background music/other noise, communicating solely via chat text, etc.?
- What obligation do members have to inform the group if they are attending the group under circumstances that threaten confidentiality?
- What measures should members take to ensure that their audio/visual equipment is not inadvertently exposing information they do not wish to share with the group (their full name, home address, occupation, etc.)?
Internet users’ increasingly fine-grained control over the flow of their personal information—the high demand for which has spurred Zoom’s popularity—is disorienting in the context of virtual support groups. In earlier parlance, the phrase “virtual support groups” often signified text-based, asynchronous, and anonymous communities; conventional groups were in-person, synchronous, and comparatively vulnerable (revealing a physical self, name, current location, voice, etc.). These descriptions no longer hold for groups convening via Zoom. While attending a meeting on this platform, a user can choose to share their real-time image, sounds, background/location, non-verbal emoji “reactions,” and/or screen in addition to text posts, providing fellow attendees with far more personal information than was possible in earlier online communities. On the other hand, one can choose not to share this information, retaining much more agency to self-conceal than in traditional support groups. Zoom’s features thus empower group members to set their own terms for participating in groups, a fact that takes on special meaning vis-à-vis individuals who have experienced a profound loss of personal autonomy. The ability to toggle between various types/levels of engagement also reduces barriers to access for those who lack the ideal conditions for attending a Zoom call. For non-professional facilitators of virtual support groups, though, ethical conundrums unfortunately arise when their commitment to these core values of empowerment and access threatens to undermine their commitment to confidentiality.
The questions listed above are difficult enough, but even if answered through group policies, they are quickly eclipsed by even thornier questions about each policy’s relative importance, the harm entailed in violating it, and the harm entailed in enforcing it. Put simply, someone must decide when (if ever) a group member who doesn’t follow confidentiality policies should therefore be removed from the group. Such an action is extreme, and it compels the meeting host to decide that their ethical duty to maintain confidentiality while serving a vulnerable population is more important than their duty to benefit said population by securing their access to resources. Whereas confidentiality is often a necessary condition for access to social services, permitting people to speak freely about sensitive subjects, here one of these principles must be upheld at the cost of the other. What lengths should Zoom hosts go to, then, in order to protect confidentiality? Needless to say, the ethical ideals one pursues in theory are not always effective in practice, and the considerations scholars should take while researching support groups are not the same as those guiding the work of a non-professional group facilitator. I assume that if I were attending virtual support group meetings as a scholar collecting data on human subjects, I would err on the side of confidentiality in ethical decisions. But anecdotally, it seems to me that every virtual group meeting I’ve actually attended has involved some level of deviation from our Zoom policies, yet I have never witnessed any facilitator removing a client from a group for this reason. Consciously or not, our decisions often prioritize a client’s right to benefit from the group—and moreover, the other clients’ right to benefit from their peer’s continued presence, even if their choices slightly increase the already massive potential for confidentiality breaches on Zoom.
In the absence of substantial data about the dangers of Zoom-based support groups, those who wish to study such groups will inevitably draw upon their own subjective expectations and goals to make ethical decisions. In doing so, they may hope the strictest approach to confidentiality will yield the most ethical research. Unfortunately, it isn’t clear that this is the case, particularly if such approaches require the loss of personal power or access to resources among members of a vulnerable population for the sake of as-yet-unknown gains. To ponder whether certain uses of certain Zoom features could cause harm to meeting attendees is quite different from asserting, definitively, that they do cause harm. Some will argue that it’s better to risk removing attendees who are not a threat than to risk retaining even one attendee who is a threat, and this is a viable position—but others will argue the opposite. In the years to come, perhaps those who hope to work with vulnerable populations via Zoom can look forward to the creation of a professional code of ethics for their respective fields; their challenge will be one of learning how to follow confidentiality policies for video teleconference-based research. In the meantime, our challenge is learning how to write them.
Works Cited
Augustine, Nora. “Facilitating Rhetoric: Paratherapeutic Activity in Community Support Groups.” Strategic Interventions in Mental Health Rhetoric, edited by Lisa Melonçon and Cathryn Molloy, 1st ed., Routledge, 2022, pp. 71–88. DOI.org (Crossref), https://doi.org/10.4324/9781003144854-7.
Boland, Joshua, et al. “A COVID-19-Era Rapid Review: Using Zoom and Skype for Qualitative Group Research.” Public Health Research & Practice, vol. 32, no. 2, 2022, pp. 1–9. DOI.org (Crossref), https://doi.org/10.17061/phrp31232112.
Buck, Amber M., and Devon F. Ralston. “I Didn’t Sign Up for Your Research Study: The Ethics of Using ‘Public’ Data.” Computers and Composition, vol. 61, Sept. 2021, pp. 1–13. DOI.org (Crossref), https://doi.org/10.1016/j.compcom.2021.102655.
Cagle, Lauren E. “The Ethics of Researching Unethical Images: A Story of Trying to Do Good Research without Doing Bad Things.” Computers and Composition, vol. 61, Sept. 2021, pp. 1–14. DOI.org (Crossref), https://doi.org/10.1016/j.compcom.2021.102651.
Carrion, Melissa. “Negotiating the Ethics of Representation in RHM Research.” Rhetoric of Health & Medicine, vol. 3, no. 4, Feb. 2021, pp. 437–48. DOI.org (Crossref), https://doi.org/10.5744/rhm.2020.4005.
De Hertogh, Lori Beth. “Feminist Digital Research Methodology for Rhetoricians of Health and Medicine.” Journal of Business and Technical Communication, vol. 32, no. 4, Oct. 2018, pp. 480–503. DOI.org (Crossref), https://doi.org/10.1177/1050651918780188.
franzke, aline shakti, et al. Internet Research: Ethical Guidelines 3.0. Association of Internet Researchers, 2019, pp. 1–82, https://aoir.org/reports/ethics3.pdf.
Hall, G. Jon, et al. “‘NEED HELP ASAP!!!’: A Feminist Communitarian Approach to Online Research Ethics.” Online Social Research: Methods, Issues & Ethics, edited by Mark D. Johns et al., Peter Lang, 2004, pp. 239–52.
Hudson, James M., and Amy Bruckman. “‘Go Away’: Participant Objections to Being Studied and the Ethics of Chatroom Research.” The Information Society, vol. 20, no. 2, Apr. 2004, pp. 127–39. DOI.org (Crossref), https://doi.org/10.1080/01972240490423030.
Isaac, Mike, and Sheera Frenkel. “Zoom’s Biggest Rivals Are Coming for It.” The New York Times, 24 Apr. 2020, https://www.nytimes.com/2020/04/24/technology/zoom-rivals-virus-facebook-google.html.
King, Storm A. “Researching Internet Communities: Proposed Ethical Guidelines for the Reporting of Results.” The Information Society, vol. 12, no. 2, June 1996, pp. 119–28. DOI.org (Crossref), https://doi.org/10.1080/713856145.
Luka, Mary Elizabeth, et al. “A Feminist Perspective on Ethical Digital Methods.” Internet Research Ethics for the Social Age: New Challenges, Cases, and Contexts, edited by Michael Zimmer and Katharina Kinder-Kurlanda, Peter Lang, 2017, pp. 21–36.
Markham, Annette, and Elizabeth Buchanan. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee (Version 2.0). Association of Internet Researchers, 2012, pp. 1–19, https://aoir.org/reports/ethics2.pdf.
McKee, Heidi A., and James E. Porter. The Ethics of Internet Research: A Rhetorical, Case-Based Process. Peter Lang, 2009.
Nissenbaum, Helen Fay. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press, 2010.
Opel, Dawn S. “Ethical Research in ‘Health 2.0’: Considerations for Scholars of Medical Rhetoric.” Methodologies for the Rhetoric of Health & Medicine, edited by Lisa Melonçon and J. Blake Scott, Routledge, 2018, pp. 176–94.
Talukdar, Pooja. “Three Is a Crowd: Is the Boom in Zoom Mediation Piercing the Confidentiality Bubble?” American Journal of Mediation, vol. 14, 2021, pp. 151–80.
Zimmer, Michael. “‘But the Data Is Already Public’: On the Ethics of Research in Facebook.” Ethics and Information Technology, vol. 12, no. 4, Dec. 2010, pp. 313–25. DOI.org (Crossref), https://doi.org/10.1007/s10676-010-9227-5.
Endnotes
[1] Indeed, as is noted in my own autoethnographic research on support groups, studies of web-based communities may be overrepresented in the current scholarly literature precisely due to the comparative practical and ethical difficulties of studying a traditional (confidential, closed membership, face-to-face) support group (Augustine 74).
[2] For further discussion of the public/private binary construct (and limitations thereof), see Buck and Ralston 3; De Hertogh 493; Hudson and Bruckman 129; King 126; Markham and Buchanan 6; McKee and Porter 77; Opel 181.
[3] McKee and Porter speculate, for instance, that even if an academic community’s own Internet posts were being dissected in unflattering research, “Some AoIR researchers who are staunch advocates of a free use policy will no doubt stand by their convictions, swallow hard, and say, … ‘the researcher has the right to do that’” (9).
[4] Naturally, a speaker’s choice of whether to refer to what happens on Zoom as “meetings,” “calls,” “sessions,” “e-conferences,” or so forth is an indicator of their perceptions of this platform. Pooja Talukdar, for example, uses all of these terms over the course of her analysis of Zoom-based legal mediation services.
[5] For a “rapid review” of recent studies on the use of video teleconferencing platforms in qualitative group research (2015-2020), see Boland et al. (1).