In 2022, Wellcome Connecting Science, a genomic science program funded by the Wellcome Trust, organized a citizens’ jury to ask whether the U.K. government should allow scientists to edit the DNA of human embryos in order to treat serious genetic conditions. Convening a jury was a non-traditional approach to involving the public in decision-making on a complicated scientific topic that could affect public policy.
Genome editing is one example of a scientific subject that has profound societal implications. Although the technique can treat heritable genetic conditions, it is also intertwined with questions about equity, accessibility, disability rights, and privacy. The scientists who are developing genome editing and the public that could benefit from it are siloed from one another, unaware of each other’s motivations and viewpoints.
This distance between the American public and scientists has been growing. Trust in scientists to act in the public’s best interest dropped from 87 to 73 percent between 2020 and 2023, according to the Pew Research Center. Fewer people believe that science has a mostly positive effect on society: That share plummeted from 73 percent in 2019 to 57 percent in 2023. For the public to support and trust scientific research, they need to be able to engage with it. Decisions need to involve not only experts but also the people those policies intend to serve.
As scientists trained in neuroscience, genome editing, and public health, we are acutely aware of the tension between performing research within the confines of a laboratory and an increasingly alienated public that is called to trust in our work. While the politicization of COVID-19 and the government’s under-preparedness for the pandemic exacerbated this distrust, this isn’t the first time that trust has been breached. The Tuskegee Study and the case of Henrietta Lacks are just two examples from the 20th century that caused harm and deepened public distrust of scientists. We believe that deliberative approaches to democracy, such as citizens’ juries and “science courts,” have the potential to remedy this issue.
Science courts were originally proposed during the 1960s, a time not unlike ours. The 1960s and 1970s also saw a decline in the public’s faith in science due to a combination of factors: disillusionment after the Second World War, the nuclear arms race, the Vietnam War, and social revolutions like the environmental movement. Scientists were no longer seen as disinterested truth-seekers but rather as the possible cause of many of humankind’s manufactured ills. In this climate, Arthur Kantrowitz — a physicist, educator, and science policy advocate — proposed his idea of science courts in 1967. Borrowing from the U.S. court system, Kantrowitz envisioned a court that would tackle policy questions involving a core scientific issue, such as food additives or nuclear power, from a purely scientific perspective. Expert advocates would publicly argue the merits or demerits of the science before a judge, who would render an informed decision based on the facts.
The idea became popular among science advocates and enjoyed positive media coverage, but a federal science court empowered to make such influential decisions never materialized. The dream of the science court lived on, however.
Nearly 50 years later, the idea was resurrected in the form of a college class. Ellad Tadmor, a professor of aerospace engineering and mechanics at the University of Minnesota, began teaching an undergraduate honors course called Science Court in 2018. Tadmor’s version involves student researchers presenting a scientific issue from opposing sides to a jury of local volunteers. Topics have included federal investment in nuclear power, technology access for students, and mandatory civil service for U.S. citizens. Students research the topic extensively so that by the end of the semester-long mock trial, they can present facts from a well-informed, unbiased point of view. One juror at the 2019 nuclear power trial told us that he was receptive to and trusted the information presented because he saw the students as impartial, objective experts. The jury was empowered to deliver an informed, thoughtful decision based on which side presented the better argument.
Tadmor’s science court has been successful in the university setting, but as an academic exercise, it carries no real-world consequences. What might a real-world version of a science court look like? How could a science court be used more widely as a participatory decision-making method to better communicate science to the public and, hopefully, engender more trust in the way science policy is crafted?
The U.S. can look to the way the U.K. has adopted participatory decision-making at scale. There, citizens’ juries are a popular format for soliciting feedback from communities, and some evidence indicates that such engagement has bolstered public support of science and technology. The juries are made up of people who have a stake in the topic under discussion, and they hear from relevant experts.
In the 2022 Wellcome Connecting Science jury on human embryo editing, the jurors spent four days hearing expert presentations, discussing, and deliberating. In the end, they produced a detailed, nuanced report on the circumstances and conditions they believed were necessary to allow genome editing in embryos. Because people directly affected by genetic conditions were part of the jury, the report reflected their opinions. The recommendations were intended to inform policymakers, researchers, and the public at large. That is the possibility and promise of such democratic policymaking methods.
Speaking as scientists, we think the only way to know for sure whether science courts will work here in the U.S. is to try one. Run the experiment with real people who are thinking about and are affected by real issues. Give scientists the opportunity to engage with the public effectively. Allow the public to learn the scientific details behind a policy and participate in the deliberations that ultimately affect them, with the goal of dissolving the boundary between scientist and citizen.
Arik Shams, Ph.D., is a biotechnology advisor and AAAS Science and Technology Policy Fellow at the U.S. Department of State’s Office of Agriculture Policy. Views expressed here are his own.
Leana King is a Ph.D. candidate in the Cognitive Neuroanatomy Lab at the University of California, Berkeley, and a graduate fellow at the Kavli Center for Ethics, Science, and the Public.
Joy Liu, M.D., is a physician and public health professional who has previously written for ABC News, Good Morning America, and Doximity.
This article was originally published on Undark. Read the original article.