
Perspectives on Engineered Catalyst Design and Forming

This is the text version for the Addressing Rigor and Reproducibility in Thermal, Heterogeneous Catalysis webinar.

Erik Ringle, National Renewable Energy Laboratory: Well, again, hello, everyone. And welcome to today's webinar Addressing Rigor and Reproducibility in Thermal, Heterogeneous Catalysis presented by the Chemical Catalysis for Bioenergy Consortium, or ChemCatBio.

I'm Erik Ringle, the communications lead for the consortium. Before I introduce our speakers today, I'd like to cover some housekeeping items so you know how to participate and learn more about the consortium. ChemCatBio has several resources available on our website chemcatbio.org that I'd like to make you aware of. You can find tools and capabilities, publications, webinar recordings, interactive technology briefs, and more on this website. We regularly update this site, so be sure to bookmark it. And keep an eye on the news section on the homepage for notices on ChemCatBio-sponsored events and opportunities like this one today. You can also navigate to our popular tools from the website, including the Catalyst Property Database, CatCost, the Surface Phase Explorer, and many more.

Another great way to get updates about the consortium is through the ChemCatBio newsletter called The Accelerator. I invite you to subscribe to this newsletter, if you're not already, to learn about ChemCatBio events, funding opportunities, publications, research projects, research teams, and a whole lot more.

Today, you will be in listen-only mode during the webinar. You can select audio connection options to listen to your computer audio or you can dial in through your phone. You may submit questions for our panelists today using the Q&A panel. If you are currently in full-screen view, click the question mark icon located on the floating toolbar at the lower right side of your screen. If you're in split screen mode, that Q&A panel is already open and is also located at the lower right side of your screen. To submit your question, simply select "All Panelists" in the Q&A dropdown menu. Type in your question. And press "Enter" on your keyboard. You may send in those questions at any time during the presentations. We will collect these, and time permitting, address them during the Q&A session at the end.

So if you have technical difficulties or just need help during today's session, I want to direct your attention to the chat section. The chat section is different from the Q&A panel and appears as a comment bubble in your control panel. Your questions or comments in the chat section only come to me, so please be sure to use that Q&A panel I just talked about for content questions for our panelists. Automated closed captioning is available for the event today. To turn it on, select "Show Closed Captions" at the lower left side of your screen. We are also recording this webinar. It will be posted on the ChemCatBio website in the coming weeks along with these slides. Please see the URL provided on the screen here.
All right, just a quick disclaimer I'm going to read. This webinar, including all audio and images of participants and presentation materials, may be recorded, saved, edited, distributed, used internally, posted on the US Department of Energy's website, or otherwise made publicly available. If you continue to access this webinar and provide such audio or image content, you consent to such use by or on behalf of DOE and the government for government purposes and acknowledge you will not inspect or approve, or be compensated for, such use.

All right. On to our speakers for today. We have four speakers for you. The first is John West, who is the digitalization implementation lead for catalyst technologies at Johnson Matthey. Additionally, he is also their thermal analysis expert with nearly 30 years of experience. Thanks for joining us, John. Next, we have Neil Schweitzer, who is a research associate professor of chemical and biological engineering and the operations director of the reactor engineering and catalyst testing, or REACT, core facility at Northwestern University. Thanks for joining us, Neil. Robert M. Rioux is the Friedrich G. Helfferich professor of chemical engineering and professor of chemistry at the Pennsylvania State University. He received his Ph.D. in physical chemistry from the University of California, Berkeley, in 2006. His group's current research focuses on the development of multi-metallic catalysts with well-defined active site nuclearity and composition and the development of solution calorimetric techniques to understand catalytic processes at the solid-liquid interface. Thanks for joining us, Rob. And last but certainly not least is Rajamani Gounder, who is the R. Norris and Eleanor Shreve professor of chemical engineering at Purdue University. And he serves as director of the Purdue Catalysis Center. His research group studies the kinetic and mechanistic details of catalytic reactions and the structure and behavior of active sites on solid catalytic surfaces.

All right. With that, I would like to turn things over to John West to kick things off for us today. So, John, please take it away.

John West, Johnson Matthey: Thank you, Erik. I was very excited at the opportunity to introduce this webinar because I do think reliable data is key for industry. But before I go into more about that, I'd like to briefly introduce Johnson Matthey. We've been around for over 200 years now. And we're a leader in multiple markets. And now as you can see, we have over $4 billion in sales each year and over 12,000 employees.

Next slide, please. So we're involved in a lot of different industries now, from green hydrogen and blue hydrogen. We still sell an awful lot of catalytic converters. And now we take municipal waste and convert it into aviation fuel for cleaner flights. But if you look on the left-hand side of this timeline, you can see our roots are very much founded in reliable numbers. The company actually started as a metal assayer, confirming the purity of different metals, particularly gold at the time. And you can also see in 1874, we were actually responsible for making the platinum-iridium kilogram standard weight that was used until relatively recently. Next slide, please.

Ah, there's the next slide. Thank you. So a lot of you are hopefully familiar with this depiction of accuracy and precision. On the left-hand side, you can see you've got a whole spread of dots. And as you get more accurate, those dots, if you take enough of those measurements, would on average hit the bull's eye. And if you go along the lower section, you can see that the data gets tighter as you get more and more precise. But in the ideal world, you'd be in the top right corner there, having high accuracy and high precision, so you don't have to take too many measurements to be confident of your actual results. Now, I don't know if any of you have actually considered what this actually means in money terms. Can I have the next slide, please.

So I've done a hypothetical case here. So on the right now, we have the equivalent of just one target, one big target. And if you were selling a rhodium catalyst back in late 2020, you can see the price was $3,000 per troy ounce. Now, you could be selling a catalyst with a rhodium loading of 0.38%. That's effectively 3 metric tons of it, 12 drums of catalyst. Now, this would fit on three pallets. But your rhodium value on those pallets would be $8.5 million if you had the correct loading.

But if the accuracy of your numbers were off—say you have lower accuracy there and you've only measured your 0.3% loading as 0.285%, or your confidence intervals were plus or minus 0.2%—you can see the swing you can get in the actual monetary value of what you're saying is in those drums. It can vary widely. And no business could survive with numbers like this. You could improve the situation by taking multiple repeats. But now you're either going to make your business unprofitable or your customer unhappy and therefore lose all your customers.
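To make the arithmetic concrete, here is a minimal sketch of how an error in a measured precious-metal loading propagates into the stated value of a catalyst batch. The batch size, price, and loadings below are illustrative assumptions (the transcript does not fully pin down the figures on the slide), not the exact values John quotes.

```python
# Minimal sketch (illustrative numbers only, not the exact figures from the slide):
# how an error in a measured precious-metal loading propagates to the stated
# monetary value of a catalyst batch.

TROY_OUNCE_G = 31.1035  # grams per troy ounce

def contained_metal_value(batch_mass_kg, loading_wt_pct, price_per_troy_oz):
    """Value of the precious metal contained in a catalyst batch."""
    metal_mass_g = batch_mass_kg * 1000 * loading_wt_pct / 100
    return metal_mass_g / TROY_OUNCE_G * price_per_troy_oz

# Hypothetical batch: 3 metric tons of catalyst at an example rhodium price.
batch_kg = 3000
price = 15000             # USD per troy ounce (illustrative, not the slide's figure)
true_loading = 0.30       # wt%
measured_loading = 0.285  # wt% (a 5% relative error in the assay)

true_value = contained_metal_value(batch_kg, true_loading, price)
measured_value = contained_metal_value(batch_kg, measured_loading, price)

print(f"Value at true loading:      ${true_value:,.0f}")
print(f"Value at measured loading:  ${measured_value:,.0f}")
print(f"Swing from the assay error: ${true_value - measured_value:,.0f}")
```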

So it's very important that you have reliable numbers. And this doesn't apply just to metal loading. This applies to everything we do. Be it catalyst activity or even your characterization data, it needs to be reproducible and reliable. And at the moment in the literature, it's not always certain that you can take those numbers and compare them against what you do. We spend an awful lot of time in industry making sure our numbers are reliable.

I very much have a background in thermal analysis. I can't just simply look at, say, a temperature-programmed reduction in the literature and compare it directly with what I've measured. Very rarely is there enough information about how reproducible it is, whether the temperature has been confirmed, or even enough detail about the experimental conditions used. So that's my part. I'd like to now pass it over to Neil for the next section.

Neil Schweitzer, Northwestern University: Here we go. Can you hear me? OK.

John: Yeah, we can.

Neil: Thanks for that industrial perspective, John. What I wanted to do is give a perspective from the academic researcher side and introduce a little bit of the work that Raj, Rob, and I have been doing in this field to help make waves. So, in my opinion, science has a public perception problem right now, just in society in general. And this is somewhat backed up by evidence. What I'm showing here is a study from the Pew Research Center asking U.S. adults what type of effect science has on society. And you can see over the past nine years or so, the general public thinks that science is having less and less of a positive impact. And this actually goes across political demographics as well. The numbers are different, but the trends are the same.

And so what drives this kind of mistrust in science? An obvious answer, in my opinion, is the influence of political groups and lobbies. But I think there's also just a public misunderstanding of what the scientific method is. And there are constantly reports or articles coming out saying that some earlier finding has turned out not to be true. And so there's a lot of media coverage of reproducibility issues that I think contributes, at least, to this public perception problem. Next.

And so if you're not aware, we are in a reproducibility crisis, according to a lot of people. And this crisis in science started, or maybe was magnified, about 20 years ago with some of these journal articles that I'm highlighting here and newspaper articles that highlight different failings in the reproducibility of findings in science.

Next. There have also been books published on this topic about how science has failed and how it's wasting millions and billions of government dollars. Next. And if you want to get caught up on some of this, there's even a Wikipedia page on this replication crisis in science, just to make it official that we are in one. Now, this is public perception. This isn't necessarily the perception of us—the scientists within science.

But there is also evidence, if you go to the next slide, that scientists feel this way too. This is the result of a Nature survey from about 10 years ago. And I just want to highlight some of the results here. So this is 1,500 scientists. Only about 100 of them were chemists. But in this first plot on the left, you can see people's opinion of how much of the literature in our field is reproducible. And if you look at the blue bars and where the red line is at 50%, you can estimate that maybe about 25% of total respondents think that half of the literature is unreliable and not reproducible. And if you include some of the bars above it, you can come to the conclusion that about half of respondents think that at least a quarter of the literature is irreproducible, which is startling. And this chart on the right is also startling and got some headlines. About 90% of the respondents say they have tried to replicate experiments from the literature and failed to do so. And a large portion of them even failed to reproduce their own. So next.

I mean, this is admittedly a small sample size. It's only 100 chemists, and an ACS meeting has 20,000 chemists, so this is clearly a small sample size. And it's self-selected participants. But that being said, I think the conclusions here are pretty consistent with conversations I have had with many researchers in our field: that there is a general mistrust in a lot of the data that's reported in the literature. And so I think this is a problem that we want to help address. So next slide.

But why is reproducibility in the literature important? I would argue there are many reasons. But one specific reason that's practically useful is that research communities should have some sort of reasonable expectation of what the normal variance of the measurements we're making actually is. So if you go to the next—and one more. This is an example of an interlaboratory study that was done and published in ACS Central Science. Interlaboratory studies are just several groups trying to verify the same information on the same samples. And these are standard practice. This is how NIST operates. This is how organizations like ASTM operate. In this particular study, you can see in the box to the left, what they were trying to understand is—they empirically observed that in most journals, an elemental analysis had to be accurate to within 0.4% of the theoretical values.

So to the right here, where I'm showing this one compound, you can calculate what the mass percentages should be. And so if you do an elemental analysis of the composition of carbon, hydrogen, and nitrogen in that molecule—a sample of your own molecule—journal requirements are that your analysis needs to be within 0.4% for that to be considered a pure compound. And so what these authors did is they took this one compound—they actually took four or five different ones—and they sent them to about a dozen commercial labs just to have the elemental analysis performed.
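As an illustration of the arithmetic behind that 0.4% rule, here is a minimal sketch that computes theoretical elemental weight percentages from a molecular formula and applies the tolerance. The compound (caffeine) and the "measured" values are hypothetical stand-ins, not the compounds used in the study.

```python
# Minimal sketch of the arithmetic behind the "within 0.4%" elemental-analysis rule:
# compute theoretical C/H/N weight percentages from a molecular formula and check
# whether a measured value falls within 0.4 absolute percentage points.
# The formula below (caffeine) is a hypothetical example, not the study's compound.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def theoretical_wt_pct(formula):
    """formula: dict of element -> atom count, e.g. caffeine C8H10N4O2."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return {el: 100 * ATOMIC_MASS[el] * n / total for el, n in formula.items()}

def passes_journal_rule(element, measured_pct, formula, tolerance=0.4):
    """True if the measured wt% is within `tolerance` absolute points of theory."""
    return abs(measured_pct - theoretical_wt_pct(formula)[element]) <= tolerance

caffeine = {"C": 8, "H": 10, "N": 4, "O": 2}
print(theoretical_wt_pct(caffeine))              # ~49.5% C, 5.2% H, 28.9% N, 16.5% O
print(passes_journal_rule("C", 49.1, caffeine))  # True: off by ~0.4 points
print(passes_journal_rule("N", 28.0, caffeine))  # False: off by ~0.9 points
```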

And you can see the results on the right here. A value of zero means that the measurement matched the theoretical value of the weight percent exactly. But you can see the data here in this bar plot scatters quite a bit, from commercial labs that do this professionally. And there are different conclusions you can come to here that in hindsight maybe are obvious: the hydrogen results, because it's a light atom, have a very tight variance around zero, while the carbon and nitrogen results have a very wide variance.
And you can see the data points there outside the bars, which are all the outliers from the averages of the rest. And so this calls into question: is this 0.4% requirement reasonable or not, for journal submissions and for the community in general? Next slide.

So in our field, specifically in heterogeneous catalysis, I think there are reasonable questions to be asked about what a reasonable variance is. If I measure the rate of a catalyst and someone else measures the rate, what is a reasonable variance if we do everything exactly the same? And I don't think there's an obvious answer. I think our field is behind in trying to answer those types of questions. There are ample papers out there that are reviews reporting data, but far fewer papers that actually analyze the data from those reviews to answer these types of questions. This is one example from a review that's almost 30 years old now. But in this paper, they were looking at different materials. And so the plot on the left is for ethylene hydrogenation on several different forms of platinum.

And you can see when you make an Arrhenius plot of the turnover rate, which is the rate normalized to a measured number of active sites, across a wide range of temperatures and materials, you get a pretty consistent value for what you expect the rate to be, maybe within an order of magnitude.
And so this might be a case where, for a structure-insensitive reaction, what we can learn is that maybe sample history is less important when it comes to the reproducibility of these types of measurements. Then you look at the plot on the right, which is a similar plot—turnover rates for a bunch of different platinum catalysts for propane hydrogenolysis, which is known to be structure-sensitive.

If you plot the rate as a function of particle diameter, you get a wide variance, something that we would assume is just not reproducible. Whether the sample history is playing a role, or maybe the method these are tested under is playing a role, it's not clear. But I think a reasonable question is: should we expect the same variance on a rate measurement for these two different materials, or is the natural variance just much larger for one? It's hard to tell. Next slide.
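For readers new to the normalization Neil describes, here is a minimal sketch of converting a measured rate into a turnover frequency using metal loading and dispersion. The catalyst properties and rate are hypothetical, and dispersion from chemisorption is only one common way to count sites.

```python
# Minimal sketch (illustrative numbers): normalizing a measured rate to a
# turnover frequency (TOF) using the metal loading and dispersion, which is
# the normalization behind the Arrhenius-style comparisons described above.

M_PT = 195.08  # g/mol, molar mass of platinum

def turnover_frequency(rate_mol_per_g_s, metal_loading_wt_pct, dispersion):
    """
    rate_mol_per_g_s     : measured rate, mol product per gram catalyst per second
    metal_loading_wt_pct : metal loading in wt%
    dispersion           : fraction of metal atoms exposed at the surface (0-1),
                           e.g. estimated from H2 chemisorption
    Returns TOF in s^-1 (molecules converted per surface metal atom per second).
    """
    surface_metal_mol_per_g = (metal_loading_wt_pct / 100) / M_PT * dispersion
    return rate_mol_per_g_s / surface_metal_mol_per_g

# Hypothetical example: 1 wt% Pt catalyst, 40% dispersion, rate of 2e-6 mol/(g*s)
print(f"TOF = {turnover_frequency(2e-6, 1.0, 0.40):.2f} s^-1")  # ~0.10 s^-1
```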

And an important point here is that there are questions about how much variance there should be in a particular measurement before we even consider the competency of whoever is running the experiments or the rigor of the experiments themselves. So there's a lot of questions to be answered. Next slide.

And so why isn't rigor and reproducibility a priority in science? And at least part of the answer is the way that the research ecosystem exists today. So there are three different groups in this ecosystem: the researchers, the publishers, and the funders. And we all have altruistic goals. Researchers want to conduct impactful science. Publishers want to disseminate that science. And funders want to support that type of science. But within these goals, there are realistic constraints. So a researcher wants to do this science, but we also want to advance our careers. Publishers need to maximize their revenue from the journals that they're supporting. And funders need to justify the decisions of who they're supporting and why. And so the way that we normally judge these constraints is through the use of metrics. So you can go to the next slide.

So usually, a researcher submits publications that are hopefully high impact. And then that reflects on the researcher and the quality of the researcher. And that helps funders justify their decisions. Now, it's difficult to read publications. We're all people with limited amounts of time. And we need easy solutions to judge science and to judge researchers and projects. And the way we do that is with these simple metrics, these simple citation and publication metrics. The problem is that these metrics incentivize—well, what Raj and Rob and I have been doing has been going on for maybe two years now. And it started by planning a workshop. So we started with the knowledge that researchers have little confidence in the data. But catalysis is a complex science, and there are a lot of unknowns about the inherent reproducibility of the measurements we make. And the current ecosystem does not necessarily incentivize trying to understand these variances; it incentivizes publishing a lot, publishing novel work, and publishing quickly.

And so our question was, if we're going to plan a workshop and try to make an impact, how do we do this? And so our approach was to target two specific groups with the outcomes of this workshop. Those groups are reviewers, who are really the first line of defense for R&R in our field, and also new researchers who are coming to our field and need to learn a lot of information in a small amount of time to be effective and successful in science. So with that, I'll pass it off to Rob here, who will tell you more about the workshop that we ran.

Robert Rioux, Pennsylvania State University: OK, great. Thank you very much, Neil. Next slide, please. So the workshop that was mentioned by Neil was planned by myself, Neil, and Raj. And what we ultimately did, rather than hold just a single workshop, was hold a phased workshop. The first phase of the workshop, which occurred on April 4, was a virtual webinar, free to anybody who registered, to learn more about this problem of rigor and reproducibility. After phase 1, we convened phase 2. And that was an in-person meeting with about 70 participants coming from academia, industry, and national laboratories.

And the objective there was really to agree on standards for both methods for testing catalytic materials and benchmark materials. And so those are ultimately the two topics that were discussed over the two days in phase 2 and that ultimately led to a report that has been published recently. Phase 3 is ongoing. And you can see the number of things that we intend to achieve to really answer the actionable items that came out of phase 2. Raj will talk more about that in the latter half of this talk. Next slide, please.

So in phase 1, what we really wanted to do was pose these five questions and then invite experts from inside and outside the physical sciences to really help us understand the problem of rigor and reproducibility. One of the things we wanted to identify was the breadth of the problem—how widespread is it? And we wanted to learn that by inviting people from other fields. And so one of our first talks came from Professor Tackett at Northwestern, who's a clinical psychologist and also an editor for a clinical psychology journal.

And she told us about the efforts that have been ongoing in psychology. And as Neil had mentioned in one of his slides, we may be a little bit behind the curve here on rigor and reproducibility issues in thermal heterogeneous catalysis. But the psychology community has been tackling this problem for quite a long time. And Professor Tackett was able to give us great insight into what her field is doing. Then we had people closer to home, to our field of catalysis, come in and tell us about work they have done looking at meta-analysis of data in adsorption science to understand reproducibility trends within the literature.
We also had a speaker come from a journal where the key component of the journal is rigor and reproducibility: every synthesis published in that journal has to be reproduced before the method is accepted for publication.

And then we ended this one-day virtual webinar with a panel discussion with journal editors. How do journal editors see this issue of rigor and reproducibility? What are the actionable items that a journal is taking, or may take, to ensure that publications contain rigorous and reproducible data? So it was a very—I mean, it was packed full of information for all of us to learn about rigor and reproducibility, and it ultimately led us to the phase 2 aspect of this workshop. Could you go to the next slide, please?

And so phase 2 was this two-day hybrid meeting. It happened in Rosemont, Illinois. Some people participated virtually, but most people were present in person. It was broken down essentially into two days. The first day, we really talked about standardized method reporting. We all use a variety of techniques in catalytic science to characterize our materials and to characterize their rates. How do we report the data from these methods consistently? So that was the primary discussion on day one. On day two, we spent the day talking about benchmark materials. These are the classes of materials on the right-hand side here on the bottom that we discussed in this first meeting. And ultimately, day one and day two led to the generation of a report. Next slide, please.

Here is the cover of the report, actually designed by an NREL scientist—an NREL artist. And we summarize a number of the topics from day one and day two in this workshop report.

The DOI for that report can be seen below. But you can also see below, on the bottom right-hand side, a URL for the CatalysisRR website that contains a PDF copy of this report that you can download at any time. The report is large, but I'm going to focus on just one of the sections. And the section I'm going to focus on turns out to be section 5, which is really best practices and recommendations for catalytic testing. All the sections in the report essentially follow a common theme where, in each section, we talk about common applications, maybe the known limitations of the method, or the known limitations of materials that are out there and considered benchmarks for a certain class of materials. And then we summarize some specific recommendations for how you should report your data in the literature and provide references in each section for best practices. Next slide, please.

So here's just the table of contents. This is not meant for you to read the details. Like I said, you can download this report at the URL provided in the bottom right-hand corner. But essentially, we looked at recommendations for benchmarking materials and recommendations for methods reporting. Next slide, please.

So this is just a snapshot of what comes out of section 5 on catalyst testing. And you can see here on the left-hand side—and again, there's no intention for you to be able to read this—basically a flow chart that was put together by the authors of section 5 on how one should go about ensuring that the data you are receiving from your catalytic reactor is collected rigorously and can be reproduced by others, because it also provides information on what types of data should be reported with the testing. And on the right-hand side is just a quick summary, in a few words, of the topics that were covered in section 5. I'll just focus on the technical recommendations: choosing the proper reactor is important depending on the type of chemistry you're catalyzing, whether your reactor is operating isothermally or not, and how do you test to ensure that you're under isothermal conditions?

Obviously, concentration gradients in reactors are critical to eliminate in order to report rigorous rates, and some recommendations on how you do that, how you test for that, are given. Flow patterns, of course, are also important. And then what is also discussed here is pellet-scale phenomena for catalyst testing—so not thinking just on the powder scale, the individual grain scale, but thinking about using larger pellets. And then from section 5, some recommendations came out that can be seen down here. And I believe that is my last slide. And so I'll turn it over to Raj.
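Section 5 of the report gives its own recommendations and references for testing for these gradients. As one example of the kind of diagnostic involved, here is a minimal sketch of the Weisz-Prater criterion, a commonly used check for internal (intraparticle) concentration gradients; it is offered here as an illustration, not necessarily the report's prescribed test, and the numbers are hypothetical.

```python
# Minimal sketch of one commonly used diagnostic for internal concentration
# gradients, the Weisz-Prater criterion (an example for illustration; the
# report's section 5 gives its own recommendations and references).
#
#   C_WP = r_obs * rho_p * R_p^2 / (D_eff * C_s)
#
# C_WP << 1 suggests the observed rate is not limited by intraparticle diffusion.

def weisz_prater(rate_obs, particle_density, particle_radius, d_eff, c_surface):
    """
    rate_obs         : observed rate per catalyst mass, mol/(kg*s)
    particle_density : particle density, kg/m^3
    particle_radius  : m
    d_eff            : effective diffusivity of the reactant, m^2/s
    c_surface        : reactant concentration at the external surface, mol/m^3
    """
    return rate_obs * particle_density * particle_radius**2 / (d_eff * c_surface)

# Hypothetical numbers for a powdered catalyst:
c_wp = weisz_prater(rate_obs=1e-3, particle_density=1000,
                    particle_radius=50e-6, d_eff=1e-7, c_surface=10)
print(f"C_WP = {c_wp:.3f}")  # << 1 here, so internal gradients are likely negligible
```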
 
Rajamani Gounder, Purdue University: All right. Thanks, Rob. Next slide, please. So what I'll focus on to wrap things up today is phase 3 of this effort—what is being planned after that in-person workshop that Rob mentioned, which happened in the summer of 2022. One of the activities coming out of it is the next version of this workshop—we're going to call it v2.0—hopefully held in coordination with a major upcoming international meeting. For the first workshop, we included about 60 to 70 researchers from different career stages, different institutions, and different backgrounds to try to capture a diverse set of input.

We'd like to include different participants in the next workshop to be planned. We see this as an evergreen process, with updates to the current report. We only had time to cover a subset of all the methods and materials that are used in heterogeneous thermal catalysis. So, for example, here's a snapshot of the table of contents. On the right, there are only a couple of classes of materials we had time to get into. There are understandably more. So those are the kinds of technical topics we'd like to provide and collect information on in the next version and to continually update with time.

We also realized that thermal heterogeneous catalysis unto itself is a whole workshop. There are other important areas of catalysis, such as electrocatalysis, homogeneous catalysis, and machine learning. Those are actually three workshops that are being organized at the moment. We know some of the details already, including their co-organizers and locations and dates. But if you happen to be interested in these other three topics, keep an eye out for these workshops and more information over the coming months. Next slide, please.

One of the things that you'll also see coming out of the workshop is, of course, that the full-length report is available to download and access. But we are also trying to translate it into shorter articles and guides that can be published on focus topics within that report. The first such effort is a series of articles that will be coming out in the Journal of Catalysis. And here's just a snapshot of one of the articles that's already available online. So you'll see a series of these, for example, coming out in the Journal of Catalysis on specific topics contained within the report. Next slide, please.

So finally, while we had about 60 or 70 of the workshop organizers, participants, and speakers together in Rosemont, Illinois, for those two to three days in 2022, not only did we tackle the technical topics that are summarized in the report, we also had many conversations structured around: what are the challenges of trying to improve rigor and reproducibility in our field? And what are some other initiatives and activities that we might want to consider as a community to help improve this with time?

I'll talk about three of them in the following slides. But as we had these discussions, as you can see in the bottom left, there were broader issues that came up around community adoption and incentivization. These are not things I'll talk about today, but they're contained in the report. If one were to implement any of these strategies, we would need to tackle issues around data storage, formatting, and accessibility, and also how this would be involved in writing journal publications, reviewing journal publications, and in research proposals. On the topic of benchmarking and data storage, I wanted to highlight an interesting speaker at the workshop from NREL who talked about this effort in a different scientific community.

So Nikos Kopidakis, who leads the working group for cell and module performance measurement for photovoltaics at NREL, has led this effort to build a database for the community for this type of measurement, where other labs and research groups can actually send materials to a central facility to be evaluated against benchmark materials and kept in a database for the entire community. So we talked about whether that would be a good idea or feasible for certain efforts in catalysis. Next slide, please.

In terms of future activities that we're planning, the first would be catalysis-focused interlaboratory studies. Neil mentioned a little bit about this and what it is. He mentioned the need for interlaboratory studies around just making measurements. There are also interlaboratory studies that have been done about data analysis. So an example on the right is from the reference at the bottom of the box, where BET adsorption isotherms—the same data sets, for 18 different materials—were provided to about 60 different research labs. And it was just asked of them: how do you analyze the data to come up with a number? And there was a large variance, as you can see, in panel A of this plot. So the need not only to make materials and characterize them, but also to analyze the data, are all topics where ILS studies can help.
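As a sketch of the kind of analysis that study asked labs to perform, here is a minimal fit of the linearized BET equation over a chosen relative-pressure window; the choice of window is itself one of the sources of lab-to-lab variance such studies highlight. The function name, defaults, and usage are illustrative assumptions, not the study's code.

```python
# Minimal sketch of a BET surface-area analysis: fit the linearized BET
# equation over a chosen relative-pressure window. The window choice is one
# source of the lab-to-lab variance seen in the interlaboratory study.
import numpy as np

def bet_surface_area(p_rel, n_adsorbed, fit_range=(0.05, 0.30),
                     cross_section_m2=0.162e-18):
    """
    p_rel            : relative pressures p/p0 (array)
    n_adsorbed       : adsorbed amount, mol/g (array), e.g. N2 at 77 K
    fit_range        : relative-pressure window used for the linear fit
    cross_section_m2 : adsorbate cross-sectional area, m^2 (0.162 nm^2 for N2)
    Returns the BET surface area in m^2/g.
    """
    mask = (p_rel >= fit_range[0]) & (p_rel <= fit_range[1])
    x = p_rel[mask]
    y = x / (n_adsorbed[mask] * (1 - x))     # linearized BET form
    slope, intercept = np.polyfit(x, y, 1)
    n_monolayer = 1 / (slope + intercept)    # monolayer capacity, mol/g
    return n_monolayer * 6.022e23 * cross_section_m2

# Usage (hypothetical isotherm arrays p_rel and n_ads):
# area = bet_surface_area(p_rel, n_ads, fit_range=(0.05, 0.30))
```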

One of the things that the workshop identified is that although there are some ILS efforts around characterization or elemental analysis or other things like that, there are few such activities for reaction rate measurements. And so that's one example of where a new ILS study might fill a much-needed role in the landscape of ILS studies in science and in chemistry. For example, for biofeedstock processing that might typically involve a bifunctional material, having a standard sample or a standard way to make a rate measurement might be useful to compare new formulations of a bifunctional catalyst. Of course, as we do this, the need to benchmark your own material or sample against a commonly accepted benchmark would ideally be met in a way that allows that model to be funded sustainably and to be accessible to any researcher who might want to use it. Next slide, please.

Mechanisms, I think, are needed to make benchmark materials broadly accessible for the community to adopt them. There is one such effort that was initiated in the mid '70s, the EuroPt catalyst from Johnson Matthey, which was a platinum-on-silica catalyst that was sent to many different research labs, primarily in Europe, to standardize characterization measurements. One reason for the end of that effort was just the unavailability of the material—with time, the original sample ran out. So, of course, this kind of thing is challenging to produce from one source. And so one of the things the workshop participants talked about was whether or not it would be better to standardize a recipe for a material that labs could independently make on their own, characterize, and then upload both the characterization data and reaction data on that material to maybe a crowdsourced database, so that such a material could evolve organically from natural activities within the community. We also thought that would provide a good opportunity for training new researchers entering the field or a new student joining a research group, for example. Next slide, please.

Finally, producing training videos and learning modules would be a future activity that members of the group are going to start to put together. This would really be tips and tricks on ways you can perform a measurement or make a material with improved reproducibility. And this is really meant to target new researchers, such as new graduate students, undergraduate students, or postdocs entering the field, or maybe a scientist from a different field, maybe materials science, who is coming into the field of catalysis—and done in a way that's accessible to the most diverse set of researchers, to try to bring other scientific expertise into the field of catalysis. So that's the third major activity that is coming out of this effort. Next slide, please.

I think to conclude, as Neil and Rob mentioned, there is this website, CatalysisRR.org, that has the latest information. It has the workshop report but also a lot of helpful literature and references, a discussion forum, and other things. So that's the central hub for all the information related to this effort. And next slide, please.

I think we wanted to acknowledge all of the individuals that helped with the workshop that Rob mentioned: the in-person workshop committee members and organizers at the top, workshop participants in the middle, and reviewers of the content at the bottom. And highlighted in yellow are the early-career researchers, students and postdocs, who provided valuable input and perspectives to all aspects of the report. And I think I'll end there by advancing to the next slide and turning it back over to Erik to lead the next part of the webinar that we have today.

Erik: Awesome. Well, thanks, Raj. And thanks to each of you for those interesting and informative presentations. A lot was covered. And we do have time for a few questions to get more information about what you talked about. So just as a reminder, you can use the Q&A box to submit your questions to the group. Also available on the call is Frederick Baddour with the National Renewable Energy Laboratory and ChemCatBio, if there's a question specific to the consortium. But while the questions keep coming in, I'll kick things off to the group. There's been a lot of talk about rigor and reproducibility today. But I have the question of why benchmarking might be important for this kind of work as well.

Neil: Yeah. I think the important thing is the combination of rigor, reproducibility, and benchmarking. As an individual researcher in a lab making measurements, it's not enough for you to be able to make the same measurement repeatedly over and over again. A good example is mass transport limitations that disguise intrinsic rates. I mean, that's a real phenomenon that you can repeat over and over again, or if there are heat transfer issues in your reactor setup coming from your furnace, for example, or something like that—those are repeatable. And your goal is to get the intrinsic rate. But those influences are repeatable.

And so the reason benchmarking is so important in combination with rigor and reproducibility is so that you have some sort of external value that you can compare to when you're making your own measurements, to make sure that you're getting the values that you're really searching for, like intrinsic rate values—that you're able to make those measurements.

John: The other thing I'd say about benchmarking is that once you've managed to get things benchmarked, you then actually have the opportunity to combine your data sets to create something new, which just would not be possible if you're not sure that things are aligned.

Rajamani: Yeah, I can add—I think when you're running a research group or a research organization or outfit, whenever you bring new researchers or new team members in, the ability for them to benchmark a measurement against legacy data sets is important to make sure that the data can connect through the different teams and different people that are working on the project. And whenever you're starting a new project and a new chemistry, oftentimes the early stage is to look for a literature reference. And then the first thing you do in the research project is to see if you can reproduce or benchmark the material and measurement against what has been published. And that is usually done with different degrees of success. And then a lot of time and effort is usually spent trying to figure out what the difference was. And so it can help on both of those fronts.

Robert: Yeah, I would just add, having a benchmark material for any chemistry is important because academics are often consumed with stating that they've got the most active, the best catalyst ever. And at least now, if you had the ability to compare against a benchmark, you'd have a really good point of reference or standard to compare against.

Erik: Great. Another question. What is the biggest single step one might take to increase their reliability and reproducibility when conducting catalysis research? Single biggest step they can take.

John: Doing everything consistently. So very much document what you're doing or are going to do, and do the same thing every time. And make sure the catalyst is loaded into the same position, you're using the same size fraction, and the thermocouples are in the same position. Consistency goes a long way and allows you to find, when you do start benchmarking, what the actual differences are between your equipment, rather than just seeing the natural variance in your test.

Neil: And another thing I advise students on is to design experiments that have slight differences, and make sure that whatever that difference is, it's predictable and has the effect that you want. An obvious or good case is, again, with mass transport limitations. If you're measuring with a certain amount of catalyst, add less catalyst next time, and maybe adjust the flows, and make sure it has the expected effect so that you can build more confidence in the values that you're getting.
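Here is a minimal sketch of the kind of consistency check Neil describes: change the catalyst mass between runs, adjust flows, and verify the mass-normalized rate is unchanged within a tolerance. The differential-conversion rate expression, the example numbers, and the 15% tolerance are assumptions for illustration, not a prescribed protocol.

```python
# Minimal sketch of the consistency check described above: change the catalyst
# mass between runs and verify that the rate normalized per gram stays the same
# within a chosen tolerance (if it doesn't, transport artifacts are suspected).

def rate_per_gram(conversion, molar_flow_in, catalyst_mass_g):
    """Approximate rate, mol/(g*s), valid at low (differential) conversion."""
    return conversion * molar_flow_in / catalyst_mass_g

def consistent(rate_a, rate_b, rel_tol=0.15):
    """True if the two normalized rates agree within rel_tol (assumed tolerance)."""
    return abs(rate_a - rate_b) / rate_a <= rel_tol

# Hypothetical pair of runs: full bed vs. half bed at the same inlet flow
run1 = rate_per_gram(conversion=0.08, molar_flow_in=1.0e-5, catalyst_mass_g=0.100)
run2 = rate_per_gram(conversion=0.04, molar_flow_in=1.0e-5, catalyst_mass_g=0.050)
print(consistent(run1, run2))  # True here: halving the mass halved the conversion
```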

Rajamani: Yeah, I think I'll maybe echo what John said. I think early on we had discussions about whether it would make more sense to improve rigor and reproducibility by, let's say, coming up with a prescriptive protocol on how to do something. But that assumes you know what that prescription is. More likely, it's just being more attentive to detail and precise about what you did in the experiment, so that you can identify what led to differences between group A and group B. In many discussions we've had, simple things like catalyst storage or catalyst pretreatment, and all of these details, end up being the lurking variable that causes the rate to be very different. And so as long as you can go back and accurately find out what was done in different experiments, that lets you then improve R&R on your own individual basis.

Robert: Yeah, I guess I would just add, reproducing takes time, so we need to be patient. Someone like myself, who's advising students and hoping that they produce rigorous data that's reproducible, especially has to recognize that that takes time.

Erik: Great discussion. A couple more questions. How are benchmark catalysts or recipes chosen? Will there be a different benchmark catalyst for every reaction?

Neil: I think there has to be. I mean, we know already that there's no one catalyst that does everything. And so I think there would have to be specific benchmarks not only for different chemistries but also perhaps for the same chemistry but different material types. If you think of metal catalysis versus oxide catalysis on something that can do similar chemistry, you may still need different benchmarks for those two types of materials, because the different material classes can have drastically different catalytic properties.

John: Of course, one of the key things about standards is obviously we need to make sure the material is very stable, and stable in storage.

Erik: Are there ways to move journals to maybe only accept submissions with reactivity reported from benchmark systems? Or should journals even do that?

Neil: I think that's a good question. I mean, in developing our workshop, we had a lot of conversations about including journals and journal editors and how to influence what journals are doing. But that was one of the reasons, in my opinion, that we needed to target reviewers specifically. Journals have that power, but reviewers do too. And so as a community, if we can build momentum together as reviewers to all try to enforce maybe the use of benchmarks, but also enforce standards for how data is reported and how experiments are performed, we can have an impact on that too.

Rajamani: I think the more general idea related to that comment is: what can a journal do to improve R&R? And one thing could be, if the journal had the desire and a mechanism to do it, to ask that a benchmark measurement be performed alongside the new catalyst or the new material. Another route to improve R&R is to require that the author provide the primary data set used—literally the gas chromatography data, the primary data used to calculate the reaction rate—so that down the road, retroactively, data and results can be analyzed and decomposed to figure out what the origins of differences might be. So I think some mechanism like that could be useful for a journal to implement down the road.

Robert: Yeah, just to follow up on what Raj said, because I think it's really important: when we had our virtual workshop, John Kitchin from Carnegie Mellon gave a talk and basically demonstrated for us how you can easily embed your raw data at various levels—whether it's gas chromatograph traces or the Excel spreadsheets you use to calculate rates—into the paper. And a lot of that can be achieved without the journal even knowing or requiring it. And if it were to become a requirement from journals, John and others have really laid out a framework for the inexperienced to easily be able to do that.

Neil: Yeah, just to comment on that, we talked a lot about whether there should be a database, a database of materials. And John's comment was that the literature should be our database, and we should be able to extract from it the data we need for these types of meta-analysis studies.

Erik: Lots of interesting discussions. And I think we could probably continue this conversation for a while. But unfortunately, we're over time here. So I think that's all the time we have for questions today. But I want to thank, again, everyone who joined today. And a special thanks to our four panelists—John, Neil, Rob, and Raj—for sharing these insights on rigor and reproducibility.

Just a quick reminder that a recording of this presentation will be posted to the webinar section of the ChemCatBio website as soon as it is available. If your question didn't get answered, we encourage you to contact any of our panelists. Their emails are on the screen here today, or you can contact ChemCatBio through our website, which is, again, ChemCatBio.org. And I'd just like to make one last plug for the ChemCatBio newsletter called The Accelerator—just a great resource to keep tabs on any further updates or events from the consortium like this one today. But with that, I think we will take our leave. Have a great rest of your day. And remember to stay tuned for future ChemCatBio webinars. Thanks, everyone.

Neil: Thanks.